LabelLab is an image analysis and classification platform. The web application will allow users to upload batches of images and classify them with labels. It will also be able to run classifications against a trained model. LabelLab also has a user and project management component as well as an image analysis component.
My goal will be to finish up the details and missing functionality of the Boost.Real library created during the previous GSoC and get it to a review-ready state.
I will work on a WebAssembly-based solution for the MediaRecorder API that is missing from browsers like Safari and Edge, by using a native MP3 encoder in the browser through Emscripten-compiled JavaScript output.
Fake news is polarizing people and our society in an adverse way. Its effect can be seen more and more as people's access to social media and the internet keeps increasing. Fake news is not only creating communal hatred but is also polarizing general elections.
Click-bait wastes a lot of people's productive time. These headlines are written in a catchy manner so that people are tempted to click the links, yet the articles don't contain any relevant information. So it is necessary to warn a user about click-bait.
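A toy sketch of how click-bait warning could start out: score a headline against a few tell-tale phrase patterns. The patterns and threshold here are illustrative assumptions, not the extension's actual method (which would more likely use a trained classifier):

```python
import re

# Hypothetical tell-tale phrases; a real detector would use a trained model.
CLICKBAIT_PATTERNS = [
    r"\byou won'?t believe\b",
    r"\bwhat happened next\b",
    r"\btop \d+\b",
    r"\bshocking\b",
    r"\bthis one trick\b",
    r"\bdoctors hate\b",
]

def clickbait_score(headline: str) -> float:
    """Return a 0..1 score: the fraction of patterns the headline matches."""
    h = headline.lower()
    hits = sum(bool(re.search(p, h)) for p in CLICKBAIT_PATTERNS)
    return hits / len(CLICKBAIT_PATTERNS)

def is_clickbait(headline: str, threshold: float = 0.15) -> bool:
    """Flag the headline when enough patterns match (threshold is a guess)."""
    return clickbait_score(headline) >= threshold
```

A keyword heuristic like this is cheap enough to run inside a browser extension on every headline; a model-based classifier would replace `clickbait_score`.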
This is a Chrome extension to detect Fake News and click-bait news on news websites and social media like Facebook and Twitter.
My proposal is based on enhancements that would incorporate real-world use cases into the project, together with testing and maintainability work that will bring consistency to contributions and make reviewing and testing thorough.
The goal involves enhancing Image Sequencer and making it rich by incorporating:
This project reviews some of the pretrained models that are available in other frameworks and projects like Keras and ONNX but currently unavailable in TensorFlow Hub as published modules, and proposes to export them to the Hub following all conventions. After porting to TensorFlow Hub, the models will also be converted to TensorFlow.js and released with guides and documentation.
Deploying a machine learning model built with Swift for TensorFlow to a mobile device, including building out the necessary components of the Swift for TensorFlow and TFLite ecosystems.
Effects are an important and widely used feature of Pitivi, and improving their UX will make Pitivi easier to use and lower its learning curve for newcomers. This will further Pitivi's goal of “allowing everyone to express themselves through the art of film making”.
The objective is to build a web-based Honeypot project by identifying the emerging attacks against web applications and report them to the community, in order to facilitate protection against targeted attacks. With the help of ModSecurity, we lay HoneyTraps by adding more network ports that will accept HTTP request traffic.
Extend VolEsti (a C++ library with an R interface) by implementing randomized algorithms for convex optimization. First, there is a need to implement some more random walks so we can choose which to use with respect to our problem, e.g. the ratio of dimensions to constraints. Such a choice is important, because the sampling is the bottleneck of the proposed algorithms, so it must be as efficient as possible. Second, use these sampling methods to solve convex optimization problems and the redundancy problem. I propose a variety of randomized algorithms, such as one that uses cutting planes and one that uses simulated annealing. In the end, I will provide support for linear programming, semidefinite programming and mixed integer linear programming. A good part of the project will be testing to provide empirical data, since some of these algorithms perform better in practice than predicted by theory.
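To make the "random walks for sampling a polytope" idea concrete, here is a minimal hit-and-run walk over a polytope {x : Ax ≤ b}, written as a plain sketch (not VolEsti's actual C++ implementation; constraint format and parameters are illustrative):

```python
import random

def hit_and_run(A, b, x, n_samples, seed=0):
    """Toy hit-and-run sampler in the polytope {x : A x <= b}.
    A: list of constraint rows, b: list of bounds, x: interior start point."""
    rng = random.Random(seed)
    dim = len(x)
    samples = []
    for _ in range(n_samples):
        # Pick a uniformly random direction on the unit sphere.
        d = [rng.gauss(0, 1) for _ in range(dim)]
        norm = sum(v * v for v in d) ** 0.5
        d = [v / norm for v in d]
        # Find the chord through x along d: the range of t with A(x + t d) <= b.
        t_min, t_max = -1e18, 1e18
        for row, bi in zip(A, b):
            ad = sum(r * v for r, v in zip(row, d))
            ax = sum(r * v for r, v in zip(row, x))
            if abs(ad) < 1e-12:
                continue
            t = (bi - ax) / ad
            if ad > 0:
                t_max = min(t_max, t)
            else:
                t_min = max(t_min, t)
        # Jump to a uniform point on the chord; repeat.
        t = rng.uniform(t_min, t_max)
        x = [xi + t * di for xi, di in zip(x, d)]
        samples.append(x)
    return samples
```

Every step stays inside the polytope by construction, which is why walk efficiency (mixing time), not feasibility, is the bottleneck the proposal worries about.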
Develop a simulator which will support all the features of the SBML core specification.
Integrate the REST API that is being added to upstream Roundup to bugs.python.org (b.p.o). Develop the following list of tools and features that employ the REST API:
Create a hardware/software design that will incorporate shift registers to allow BB.org hardware to communicate with external hardware via a parallel, bi-directional bus. Create a software design that incorporates both a kernel driver to communicate with the parallel bus and a user-space application/library using ioctl() calls to communicate with this kernel driver. The PRU will be used to implement the low-level details of the bus.
Multivariate Analysis techniques are indispensable in the era of Big Data. However, a unified and user-friendly framework has been lacking to date. Individual packages allow for certain special cases handled by MoMA, but for advanced cases, no standard packaged solution is available. The MoMA package will provide the first unified toolbox for all forms of high-dimensional multivariate analysis in R or any other language. MoMA will empower statisticians and data scientists to flexibly find patterns that respect the specific structure of their data and allow for truly Modern Multivariate Analysis.
OpenWorm is an attempt to build a biophysical simulation of the model organism C. elegans via assimilation of published data from a range of sources. Publication of biological data is often weakly structured and has no standardised format, making assimilation challenging. Currently PyOpenWorm acts as a quick data access layer for researchers to query C. elegans anatomy and physiology. It currently incorporates data from the primary literature, WormAtlas and WormBase. This project aims to expand the data available for model validation and for query in PyOpenWorm. Expanding the data available would aid hypothesis generation in the C. elegans research community, by allowing fast access to past observations and by facilitating the generation of more robust models through model validation tests such as SciUnit. Ultimately, however, it could also be a poster child to inspire an increase in standardisation and statistical accountability in the publication of biological research as a whole.
The project idea aims to integrate a library for generating QR codes into LibreOffice itself and to add options for using QR codes in LO applications.
QR codes will be generated for text or URLs, including special (UTF) characters. The generated image will be in SVG format, giving better rendering quality and scalability. Since the QR code is generated as an image, it can be handled as easily as any other image.
This project is of the "improving an existing tool" type: it includes cleaning up the existing codebase and adding process injection support for Linux.
Fineract-CN-Mobile is an Android app for digital financial services that is built on top of the Fineract-CN platform. It provides banking solutions for people around the world who are unbanked. This app is for field officers who go to their customers and help them with their financial services.
I would like to improve the app by implementing the features mentioned below, which would improve the user experience, functionality and overall aesthetics of the app.
My project is about creating a complete, reusable library for graphs, including layout algorithms, in Pharo. The idea is to leave it open for multiple visualization engines to use.
I would like to work on implementing missing features in nftables. My plan is to work on the following subtasks: extending the stateful object infrastructure, allowing deletion of set elements from a ruleset, and reworking netfilter logging.
Another task that catches my attention is working on nftlb. I think my professional experience with load balancers and reverse proxies can be useful in completing this task successfully.
The coreboot project is automatically scanned by Coverity, a free static-analysis tool provided by Synopsys to open source projects. This tool analyzes the source code to check for common mistakes and errors, including static buffer overruns, null pointer dereferences, integer overflow, and other suspicious code. The coreboot project currently has over 380 flagged Coverity issues. The goal of this project is to make coreboot "Coverity clean". All outstanding issues will be classified, invalid reports will be marked as false positives, and valid ones will be patched. This will address all issues with the current codebase, and ensure a common baseline for triaging new issues in the future.
Port and integrate DRM ioctls in the NetBSD kernel for Linux binaries running on NetBSD. Convert between 32-bit and 64-bit DRM ioctl calls. Create a test suite to run and test Linux applications on NetBSD.
This project intends to provide users and malware analysts with a platform that can assist in the analysis of Linux-based IoT malware. IoT malware is a booming threat, and because of the heterogeneity of malware architectures, the instruction sets, system calls and related details differ completely, making it difficult for an analyst to proceed. This platform will try to help the analyst identify malicious intentions with the help of rules and signatures. Apart from that, it will attempt decompilation, carry out regular manual analysis through Radare2, and attempt to provide ESIL emulation of selected pieces of disassembly. Moving on from static analysis, IoT malware analysts face enormous trouble setting up environments for dynamic analysis due to varied system images, custom toolchains and many other reasons. This platform attempts to provide a detailed dynamic analysis report on user demand using Cuckoo and SystemTap scripts (as of now).
A new topology toolbox for gvSIG Desktop. This tool will provide a group of integrity rules that will check the validity of the relationships between geometries in the data. A new topology data model can be created for each project. The toolbox provides a new set of tools to navigate, find and fix the validation errors specific to each topology rule. Right now, there are just a few topology rules implemented, with limited actions. This project will analyze, implement and optimize a new set of rules that will be incorporated into this framework. These tools can be created in Java or in Jython through the Scripting Composer tool.
Swift for TensorFlow is a Swift library that helps develop and train ML models. Data Visualization can be helpful when exploring a dataset. It helps in identifying patterns, corrupt data, outliers, etc. It helps in the qualitative understanding of data. But as of now, there is no way one can plot graphs natively using Swift that works cross-platform. We do have CorePlot, but it works only on macOS and iOS. This project aims to make a Data Visualization library (similar to matplotlib) for Swift that works on Linux, macOS and Windows as well.
LTSP (Linux Terminal Service Project) allows diskless workstations to be netbooted from a single server image, with centralized authentication and home directories. But the project shows its age; the initial thin-client focused design is no longer suitable for the netbooted fat client/wayland era, and it contains a lot of stale source code. This GSoC project is about designing and implementing a modern replacement of LTSP.
This project aims to provide the capability of the FreeBSD ports infrastructure to safely and cleanly build ports and all their dependencies without superuser privileges, jails, or touching the installed system in any way, in the interest of improving the safety, reliability, and repeatability of ports building without the administrative and resource overhead of a separated build host or jail.
The main goal of this project is to implement a mechanism to be in sync with the latest human data submitted to dbSNP. Once imported, this information can be distributed via EVA implementations of the GA4GH APIs htsget and Beacon specifications, as well as the EVA website.
Given a dbSNP FTP directory with the human variant information, the pipeline should parse the JSONs for each chromosome and write the variants from the JSONs to the EVA archive.
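A minimal sketch of such a per-chromosome parse-and-write step, assuming a JSON-lines dump with illustrative field names (the real dbSNP JSON schema is far richer, and the EVA archive writer is a stand-in here):

```python
import json

def parse_chromosome_json(lines):
    """Yield simplified variant records from one per-chromosome
    JSON-lines dump (field names are illustrative)."""
    for line in lines:
        record = json.loads(line)
        yield {
            "chrom": record["chrom"],
            "pos": record["pos"],
            "ref": record["ref"],
            "alt": record["alt"],
            "rs_id": record.get("rs_id"),
        }

def write_to_archive(variants, archive):
    """Stand-in for the EVA archive writer: append records to a store."""
    for v in variants:
        archive.append(v)
```

Streaming line by line (a generator) keeps memory use flat even for chromosome files with millions of variants.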
Passive acoustic observation of whales is an increasingly important tool for whale research. Accurately detecting whale sounds and correctly classifying them into corresponding whale pods are essential tasks, especially in the case when two or more species of whales vocalize in the same observed area. Most of the current tasks of whale sound detection and classification still need to be implemented manually.
We aim to develop two deep learning models for the detection and pod-classification of orca, or killer whale, calls in unknown long audio samples. These deep neural networks will help identify and verify killer whale calls so that researchers, grad students, and shipping vessels don't have to. The end-user interface can be made as a web-app which can easily be used by scientists in their research.
In the ten years since its creation, the Pharo programming environment has evolved greatly: it now has Github support and many cool tools; it’s faster, better and more convenient to use both for people who have been familiar with Smalltalk for decades and for complete newbies. However, there’s one thing that is behind in improvements -- code completion. And so having already had some experience with it, this is something I want to work on and improve.
As Eclipse 4diac allows programming of many different devices, such as PLCs, Raspberry Pis and Lego Mindstorms, a runtime environment has to be compiled for each device. If you then want to extend the functionality with custom function blocks, a new runtime environment has to be compiled every time a new function is created. However, compiling is complicated and takes time; therefore, a Lua engine is integrated into the runtime environment, which is then able to execute Lua code without having to compile it first.
The model zoo is a great compilation of deep learning and reinforcement learning algorithms. Currently, state-of-the-art baselines for reinforcement learning and generative models, which are present in TensorFlow and PyTorch, are lacking in this package. This project aims to add state-of-the-art reinforcement learning algorithms like Proximal Policy Optimization and Trust Region Policy Optimization, along with multi-modal translation and image-captioning networks. These models are complex to implement, and thus most users resort to standard TensorFlow/PyTorch implementations. Adding them to the model zoo would attract a lot of researchers.
Hardware accelerators are necessary, or at least desirable, in many SDR systems. GNU Radio provides an open and free platform for designing real-time SDR systems. This proposal elaborates the architecture of a Verilog design-simulation integration utilizing the existing tool Verilator. With this integration, Verilog simulation could run in real time as part of the SDR system.
Support for RTL (right-to-left) languages is still incomplete in Godot. While it supports Unicode fonts, proper support for RTL languages requires more functions to be implemented: languages like Arabic require not only reversing the direction of typing, but the cursive abjads (alphabets) of the Arabic script also require different character shapes depending on context. So my project consists of building APIs and implementing them in text controls to support the correct layout of RTL text and to correctly handle BiDi text input.
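To illustrate the reordering half of the problem (ignoring contextual shaping entirely), here is a deliberately simplified sketch that reverses contiguous RTL runs for display. A real implementation must follow the full Unicode Bidirectional Algorithm (UAX #9) plus Arabic shaping; this toy version handles only the simplest mixed-direction case:

```python
def is_rtl(ch):
    """Very rough RTL test: Hebrew and Arabic blocks only (toy version)."""
    return '\u0590' <= ch <= '\u08FF'

def visual_order(logical):
    """Reverse each contiguous RTL run so logical order becomes
    (approximate) visual order. Not a substitute for UAX #9."""
    out, run = [], []
    for ch in logical:
        if is_rtl(ch):
            run.append(ch)
        else:
            out.extend(reversed(run))
            run = []
            out.append(ch)
    out.extend(reversed(run))
    return ''.join(out)
```

Real BiDi additionally needs embedding levels, mirrored brackets, and directionally neutral character resolution, which is why a dedicated API in the engine's text controls is warranted.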
Add to Cantor, a KDE program with functionality similar to Jupyter's, a feature to import/export Jupyter notebooks. This gives Cantor access to the results of the Jupyter community's work and solves the problem of sharing files between Cantor users and Jupyter users. It also allows easier migration from Jupyter to Cantor. Solving these problems could increase Cantor's popularity.
Usability and cutting-edge models are two main factors of particular concern to the academic community. This project concentrates on enhancing both factors by extending the TensorFlow tutorial “A Concise Handbook of TensorFlow” (https://tf.wiki) to cover the various features of TensorFlow 2.0, and by developing a library containing clean, well-packaged and cutting-edge Keras layers, starting with Graph Neural Networks and Memory Networks.
OSM2World is a converter that renders 3D models of the world, based on exported OpenStreetMap data. As the project is still under construction and some OSM tag styles are not yet supported, it is in the scope of this project to expand its codebase and include various aspects of traffic sign modules in its rendering capabilities.
This project aims at feature enhancements for Joomla 4. Easier module placement, adding information about the selected menu item to the menu overview list, and the integration of the cookie consent plugin are the feature enhancements to be done for Joomla 4. These enhancements will make the user experience smoother and more seamless.
HsYAML is a pure Haskell idiomatic implementation of the YAML 1.2 data serialization language with a strong emphasis on compliance with the YAML 1.2 specification.
The HsYAML library is already successfully in use; still, there is a lot of room for improvement. I am planning to work on the following features:
Implement a YAML pipeline for dumping/emitting YAML
Extend the data model to allow for load/dump round-tripping while preserving ordering, anchors, and comments
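The round-tripping property above can be sketched in miniature: a dump/load pair over a flat mapping that preserves key order (a Python stand-in, not HsYAML's Haskell API; real round-tripping must also retain anchors and comments, which this toy version ignores):

```python
def dump(mapping):
    """Emit a flat YAML-style mapping, preserving key order
    (Python dicts keep insertion order, the property at stake)."""
    return "\n".join(f"{k}: {v}" for k, v in mapping.items()) + "\n"

def load(text):
    """Parse the flat mapping back, keeping the order of appearance.
    All values come back as strings in this sketch."""
    result = {}
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue
        key, _, value = line.partition(": ")
        result[key] = value
    return result
```

The round-trip invariant to test for is `load(dump(d)) == d` with the key order intact; the full project extends this to nested nodes, anchors, and comment attachment.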
The project is about implementing the computationally intensive FindSim program on a cloud server and building communication between the web server and the computation server. With a high-performance server specifically set up for computing simulation tasks, the system can process more data in less time and the web server can provide better web service.
Implementation of core features needed for XR (virtual, augmented and mixed reality) headsets via the new OpenXR specification (currently in provisional state).
While virtual reality has reached mainstream adoption, it is still not possible to use this technology directly within Blender. However, there is great potential in new XR workflows. This project aims to introduce the core features necessary to build immersive experiences in the Blender viewport through OpenXR. Namely, this concerns rendering and interaction with head-mounted displays (HMDs).
Molecular docking—the prediction of binding modes and binding affinity of a molecule to a target of known structure—is a great computational tool for structure-based drug design. However, docking scoring functions are mostly empirical or knowledge-based and the flexibility of the receptor is completely neglected in most docking studies. Recent advances in the field showed that scoring functions can be effectively learnt by convolutional neural networks (CNNs). Here we want to build on top of these findings and develop a CNN scoring function for flexible docking by extending the capabilities of gnina—a state-of-the-art deep learning framework for molecular docking—and by building a high-quality training dataset for flexible docking.
About the project: the idea is to stabilize the project on Python 3, add unit tests (to test the migration to Python 3), add continuous integration (running the tests on each commit) and automated deployment (to assemble the packages and upload them to PyPI automatically). These are all things that would facilitate the development and installation of the project.
NpChat is a photo and file sharing application built on Android and inspired by Snapchat. It runs over the Named Data Network (NDN) and focuses on a decentralised information sharing architecture. It stands as one of the best examples of an Android application developed on the new Internet architecture. During the GSoC period, I intend to develop this application from a working prototype into a finished application, with its first version released on the Google Play Store. The current state of the app, being a prototype, leaves plenty of room for changes and improvements during the coding period.
NextCloudPi (NCP) is a preinstalled and preconfigured Nextcloud that can be used as a cloud service to securely self-host private data. NCP is open source (you can find the code on GitHub) and is an official Nextcloud project maintained by the community. The proposal describes the automatic installation of several useful and widely used open source applications on NCP, such as online office suites, and the implementation of some new features and improvements to the existing Nextcloud server and web panel, in order to make NCP more functional and useful to both novice and advanced users.
ThreadScope is a GUI tool for viewing and analyzing eventlogs to help developers understand and debug the behavior of concurrent Haskell programs. Currently, ThreadScope is inefficient when processing large eventlogs, which makes it unusable for long-running programs. However, profiling such programs is of vital importance for analyzing real-world Haskell applications. This project will improve ThreadScope by loading and analyzing eventlogs in chunks to improve memory efficiency, making it usable for analyzing large eventlogs.
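The chunked-loading principle is independent of the eventlog format: stream fixed-size blocks and fold an analysis over them, so memory stays bounded regardless of log size. A Python sketch of that shape (ThreadScope itself is Haskell; the newline count stands in for real event decoding):

```python
import io

def count_newlines_chunked(stream, chunk_size=1 << 16):
    """Fold over the input in fixed-size chunks instead of loading it
    whole, keeping memory bounded -- the principle proposed for eventlogs."""
    count = 0
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        count += chunk.count(b"\n")
    return count

# With a real log you would open(path, "rb"); here, an in-memory stream:
total = count_newlines_chunked(io.BytesIO(b"start\nGC\nstop\n"))
```

The subtlety in the real project is records straddling chunk boundaries, which requires carrying a partial-record buffer between iterations.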
I am interested in the project “Appstore for the Cytoscape App” with Alex Pico, Barry Demchak and Scooter Morris as its mentors. My goals for the summer will be to improve the overall user experience, security and accessibility of the Cytoscape App Store by implementing the following major goals:
For tedana to expand, more extensive test cases are needed. A new contributor could introduce an unknown bug into the codebase, but without extensive testing, there is no way to find out until a code review (and even then, it may slip through!). It would be wise to have vast and wide-ranging tests that catch the bug for the contributor!
For this project, I will write detailed unit test cases for the tedana codebase, hitting as many lines as I can while testing! I will also make changes to the functions themselves to type check the input parameters!
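One way the parameter type checking could look is a small decorator that validates annotated arguments at call time. This is a generic sketch, not tedana's actual approach or any of its real function signatures:

```python
import functools
import inspect

def typecheck(func):
    """Validate annotated parameters at call time (toy illustration of
    input-parameter checking)."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            if ann is not inspect.Parameter.empty and not isinstance(value, ann):
                raise TypeError(
                    f"{name} must be {ann.__name__}, got {type(value).__name__}"
                )
        return func(*args, **kwargs)
    return wrapper

@typecheck
def scale(data: list, factor: float) -> list:
    """Hypothetical example function: multiply each element by factor."""
    return [x * factor for x in data]
```

A bad argument then fails fast with a clear `TypeError` instead of surfacing as a confusing error deep inside the pipeline, which is exactly what the unit tests can assert on.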
OpenCV's DNN module allows us to run inference on a pre-trained deep neural network in order to accomplish high-end vision tasks with just a few lines of code. However, OpenCV's model zoo needs some new additions. Models that have achieved state-of-the-art (SOTA) results in different computer vision tasks are somewhat lacking in the model family currently listed in the OpenCV repository. This gives rise to the necessity of adding new, powerful models to the list. The goal of this project is to curate more models for ease of use by the OpenCV DNN module and put them in a place where they can be easily accessed, such as LFS on Git. The proposed project will add six (potentially nine) new models to the OpenCV Model Zoo. All of the selected models either have state-of-the-art results in the task they perform or belong to a task category not currently present in the model zoo (e.g. image generation). The proposed workflow and the established timeline take into account worst-case scenarios, guaranteeing the project's completion and minimizing risks.
With a booming amount of information being continuously added to the internet, organising the facts and serving this information to users becomes a very difficult task. Currently, DBpedia hosts billions of such data points and their corresponding relations in the RDF format. RDF is a directed, labeled graph data format for representing information on the Web. SPARQL is a query language for RDF.
Extracting data requires a query to be made in SPARQL, and the response to the query is a link that contains the information pertaining to the answer, or the answer itself. Accessing such data is difficult for a lay user who does not know how to write a query. This project will try to make this humongous linked data available to a larger user base in their natural languages (restricted to English for now). The primary objective of the project is to translate any natural-language (English) question into a valid SPARQL query.
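As a baseline illustration of the translation task, a single question template can be mapped onto a SPARQL pattern. This is a toy sketch with one hypothetical template; the actual project would need entity linking, relation extraction, and many more templates (or a learned model):

```python
import re

# One hypothetical question template mapped to a SPARQL skeleton using
# DBpedia's dbr:/dbo: prefixes; a real system would learn or rank many.
TEMPLATES = [
    (re.compile(r"who is the (?P<prop>\w+) of (?P<ent>[\w ]+)\??", re.I),
     "SELECT ?x WHERE {{ dbr:{ent} dbo:{prop} ?x }}"),
]

def to_sparql(question):
    """Return a SPARQL query for a matching template, else None."""
    for pattern, template in TEMPLATES:
        m = pattern.match(question)
        if m:
            ent = m.group("ent").strip().replace(" ", "_")
            return template.format(ent=ent, prop=m.group("prop").lower())
    return None
```

Template matching shows the shape of the input/output pair; the hard part the project targets is generalizing beyond hand-written patterns.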
Inclusion of USER_LDT option in NetBSD kernel allows Wine32 applications to be run on amd64 architecture. This is useful primarily owing to the majority of Windows applications being 32-bit. My work involves getting the latest version of Wine to compile and run on i386 and amd64, and building Wine with WoW64 support which will allow running Wine32 applications on amd64.
My proposal is a pirate-themed roguelike videogame that will showcase how powerful Pocket Code really is, while also delivering a fun and exciting experience for the player. The player will find himself helping cities, meeting colourful characters and fighting other pirates in a randomly generated adventure, and will face different challenges every time!
This project is to automate the reinforcement process using the Rebar Addon in FreeCAD. The idea is to create a UI on top of the current implementation to combine different types of rebars in a single dialog box as per the user's requirements. For example, combining stirrups and rebars (different types of rebars) in the case of beam reinforcement.
Improve the quality and reliability of the FEM Workbench of FreeCAD by introducing an extensive and consistent testing approach and developing a test suite covering the whole FEM module.
This project upgrades MapKnitter's Rails version from 3.2 to the latest stable version. Along with this, I will also be:
The aim of this project is to build a compliance framework, to ensure that all new implementations/products/APIs adhere to the central specification defined by GA4GH. This framework would consist of pre-defined tests which can be reused for testing any new products for compliance during the product approval process, thereby reducing the cost of writing tests again and again. The framework would also include meaningful logging and a final report generation. The final reports would be published on a public website.
We will implement a MathOptInterface feature that allows JuMP to take an optimization problem in its primal form and return the dual form in terms of Lagrangian duality. The primary project goal is that this feature should work with linear optimization problems and conic optimization problems that can be written in pure JuMP (JuMP extensions will be a bonus).
The project does not depend on any other Julia packages; it should be a pure implementation that builds on top of the existing MathOptInterface.jl package.
The main project benefits are:
These two benefits are strong building blocks for the academic optimization community. They should bring long-term benefits to research in optimization.
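For the linear case, the primal-to-dual transformation the feature automates can be sketched with standard LP duality (generic symbols, not MathOptInterface names):

```latex
\begin{aligned}
\text{(P)}\quad & \min_{x}\; c^\top x &\quad \text{s.t.}\;& A x \ge b,\; x \ge 0 \\
\text{(D)}\quad & \max_{y}\; b^\top y &\quad \text{s.t.}\;& A^\top y \le c,\; y \ge 0
\end{aligned}
```

The conic case generalizes this by replacing the sign constraints with membership in a cone and its dual cone, which is why the feature can cover both problem classes in one framework.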
Prombench, the benchmarking tool for Prometheus, will be extended to support even more tests, newer components and metrics, which will help both developers and users identify bugs and run scalability tests. Another task the proposal aims to solve is the long-standing issue of Prometheus rule formatting.
The Carbon Footprint Android app is currently in its beta testing phase. My main goal for this summer will be to introduce some awesome big features like a heatmap, a leaderboard and push notifications, alongside some small but effective features like client-side password validation, autocomplete and a remove-friend option. Additionally, improving the UI/UX of the app and improving the current code structure is also an important task. In parallel, the iOS version of the app will be kept up to date with the Android version.
Often a low-resolution image needs to be up-scaled to a higher resolution. Even though there are already a lot of interpolation methods that accomplish this, none of them actually make the image look better and more natural to the human eye.
Super resolution methods output an up-scaled image with improved details. The essentially unknown details in the high resolution output are filled in using different SR techniques.
For this project, I will focus on deep learning based SR using, mainly, EDSR and ProGANSR. I will train these models with PyTorch and convert them to ONNX representations, from which the DNN module (OpenVINO) of libXcam can run inference.
DIRAC is an open source interware platform whose roles range from the submission of jobs and the management of the data produced to the orchestration of distributed resources. It is generic software, used and extended by several Virtual Organizations (VOs).
It monitors resources like computing, storage, catalogue resources, networks, information providers, file transfer services, message queues or databases services. Members of a VO can use a mask composed of services exposed by local resources. Experienced Grid administrators apply procedures for managing such services, based on their status, as it is reported by an ever-growing set of monitoring tools. When a procedure is agreed and well-exercised, a formal policy could be derived.
For this reason, a policy system is being developed using the DIRAC framework, one that can enforce management and operational policies in a VO-specific fashion.
This project proposal offers:
Currently, beam elements (much like most of the deformable structural components) only deal with the (visco-)elastic characteristic properties, whereas the modeling of the inertia properties of the component are delegated to a set of equivalent "rigid bodies", i.e. lumped inertia components. This project is aimed at introducing consistent inertia modeling for beam elements, i.e. inertia forces and moments are generated directly within the beam element structural component, starting from beam section inertia properties per unit span.
The Open Event project provides a platform for managing all the activities in an event. It has separate mobile applications for attendees and organizers. My proposal aims at enhancing the attendee app by providing more event information, integrating more payment methods and improving the registration process. In the current state of the app, all features are implemented against the previous version of the eventyay server. My goal is to integrate new features which are implemented (or will be implemented during GSoC) in the frontend (https://next.eventyay.com) and the server (https://api.eventyay.com/).
The R package highfrequency is the go-to package for intraday financial analysis in R. In the project, I will enhance its functionalities, rework some of the foundations and expand its documentation.
Use snapshots (shallow copies of the document) instead of actions for the purposes of undo and redo in Krita. This simplifies the system and makes it easier to maintain.
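The snapshot approach can be modelled with two stacks of document copies. This is a generic sketch of the idea, not Krita's actual C++ classes or data structures:

```python
import copy

class SnapshotUndo:
    """Undo/redo via snapshots of the document instead of replaying
    inverse actions (toy model; real snapshots would be shallow,
    copy-on-write layer references rather than full copies)."""

    def __init__(self, document):
        self.document = document
        self.undo_stack = []
        self.redo_stack = []

    def checkpoint(self):
        """Record the current state before a modification."""
        self.undo_stack.append(copy.copy(self.document))
        self.redo_stack.clear()

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(copy.copy(self.document))
            self.document = self.undo_stack.pop()
        return self.document

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(copy.copy(self.document))
            self.document = self.redo_stack.pop()
        return self.document
```

Because undo simply restores a saved state, no per-action inverse logic needs to be written or maintained, which is the simplification the proposal claims.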
digiKam is a well-known desktop application for photo management. In digiKam, tags on photos are strongly supported in order to provide users with a natural workflow for searching and arranging the photos in their collections. Since many of our photos contain faces, the face tag has emerged as an essential feature of any photo management software. Aware of that, the digiKam team has put a lot of effort into developing a face engine, which can scan photos and suggest face tags automatically based on photos pre-tagged by users. However, that functionality is currently deactivated in digiKam, as it is slow and not adequately accurate. Thus, this project aims to improve the performance and accuracy of facial recognition in digiKam, in order to bring this wonderful functionality back to users in an upcoming release.
This is the proposal for the tasks I'll be pursuing in GSoC 2019. In this document I've briefly explained the tasks I'll be doing, such as OIDC, edit-contact improvements, adding several accounts, etc.
This is an application from Jingrui for Céu-Arduino; I'll do my best for it.
This proposal suggests a full-featured, modern user-mentions feature with additional improvements for Apache Allura. Users will be able to mention other users in a comment with the help of an autocomplete list. Thereafter, notification emails will be sent to the related users. Furthermore, there will be an option for users to turn notifications on or off as they wish.
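The parsing side of the feature can be sketched as follows: find `@handle` tokens in a comment and keep only those matching known usernames, so stray email addresses never trigger a notification. A generic sketch, not Allura's actual code or username rules:

```python
import re

# '@' must not be preceded by a word character, so "bob@example.com"
# does not count as a mention of "example".
MENTION_RE = re.compile(r"(?<!\w)@([A-Za-z0-9_\-]+)")

def extract_mentions(comment, known_users):
    """Return the known usernames mentioned in a comment, in order;
    unknown @handles are ignored."""
    return [u for u in MENTION_RE.findall(comment) if u in known_users]
```

The notification step would then iterate over this list, skipping users who have turned notifications off.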
Ganga is used to execute a user-defined computational task on a distributed back-end. Through this project we let users define the environment in which their task needs to be executed: the worker node will pull the user-defined container and execute the task in it.
Terasology is designed to be a modular voxel engine, but as the saying goes, “with great power comes great responsibility”: keeping track of all the modules can be tough. The idea of this proposal is to provide an automation system that aggregates all the modules and displays their information on a generated website, providing a way for users to download modules and list their dependencies. A stretch goal is improving the UX of the website, along with its speed and accessibility.
Bassa is currently limited to being an internet download manager, providing efficient use of internet bandwidth. The vision of the project is to scale it into a full-fledged file management platform involving file storage (on the server), uploads, downloads and collaboration among users via sharing resources (files, in this case), much like Google Drive or OneDrive.
This project aims to design and implement a JSON API to programmatically query the BookBrainz database. Since the existing backend of the BookBrainz website is written in node.js with its framework express.js, we will use the same stack to implement this project. The API will use the current BookBrainz ORM, bookbrainz-data-js, to access the database.
I propose to extract and analyse the body gestures and poses of the people depicted in the paintings, artwork and sculptures of the medieval period. I also propose to interpret the scenes depicted in the paintings: the time of day, indoor or outdoor setting, a well-decorated room or a cottage. Along with that, I will analyse the Christian iconography and the interactions between figures. For human pose estimation, I will extract keypoints from the images by detecting and localizing the major parts/joints of the bodies of the individuals they contain. I will also extract the emotions depicted in the images using face keypoint detection, train a classifier to predict emotion on the basis of pose, and visualise clusters of images based on emotion and pose. All techniques proposed here have previously been applied to related problems in image processing.
Poliastro aims to be an open source library for aiding Astrodynamics and Orbital Mechanics. However, it visibly lacks interactive visualization tools. My proposal aims to build a module to extract orbital data from poliastro and visualize it using the open source 3D mapping application Cesium.
With the coming revival of the bit utilities paper for the C++ Standard and the potential of a new suite of bit utilities arriving in a <bit> header [5], the goal of this Google Summer of Code 2019 project is to identify existing algorithms in libstdc++ that would benefit from additional overloads based on bit iterators. This proposal also explores the fundamental appeal of broadening this class of optimizations beyond types represented by bit iterators or std::vector<bool>, to any type whose bits are trivially relocatable under the upcoming work of Arthur O'Dwyer.
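The payoff of such overloads is processing a machine word at a time instead of a bit at a time. The following sketch (pure Python for illustration, not libstdc++ code; the function names are my own) contrasts the two approaches for counting set bits in a packed bit array, which is the kind of dispatch a bit-iterator-aware overload would perform:

```python
# Illustrative only: a per-bit count vs. a word-at-a-time popcount over a
# bit array packed into 64-bit words.

WORD_BITS = 64

def count_per_bit(words, nbits):
    """Naive count: inspect every bit individually."""
    total = 0
    for i in range(nbits):
        word, off = divmod(i, WORD_BITS)
        total += (words[word] >> off) & 1
    return total

def count_per_word(words, nbits):
    """Word-at-a-time count using a popcount on each full word."""
    total = 0
    full, rem = divmod(nbits, WORD_BITS)
    for w in words[:full]:
        total += bin(w).count("1")
    if rem:  # mask off the bits beyond nbits in the last partial word
        total += bin(words[full] & ((1 << rem) - 1)).count("1")
    return total

words = [0xDEADBEEFDEADBEEF, 0x0123456789ABCDEF, 0xFF]
assert count_per_bit(words, 136) == count_per_word(words, 136)
```

In C++ the word-wise version maps onto std::popcount over the underlying storage, which is exactly what a specialized overload can reach for when it knows the iterator is a bit iterator.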
OWASP Honeypot: the idea is to test all the modules in the code (currently 4) and fix any bugs found; after testing the entire codebase, remove the duplicated code; and then work on packet tracing. Currently only information such as IP address and country is extracted from the packets, which is not enough to classify traffic as high risk or not, so I will discuss with my mentor what else needs to be extracted from the packets and store it in the database.
The aim of this project is to carry out a rigorous and deep case study of the Index Checker by annotating Apache Commons Lang with it. The plan of action is to annotate the library, find and report bugs (if any), and provide enhancement suggestions, including improvements to the error messages. If time is left, I will repeat the above procedure with another tool.
It is better to discover performance problems sooner rather than later, and this is even more important for a database. Newly introduced performance flaws are hard to notice, and the process of discovering them is cumbersome if done manually. Since Prometheus TSDB does not have such a feature yet, this project should be the solution: the plan is to develop detailed performance tests and automate the testing process using Prow, the Kubernetes-based CI/CD system with GitHub integration. Moreover, for easy analysis, the results for the benchmarked pull request will be compared against the results for the master branch. Fortunately, the foundation for implementing the benchmarks partially exists, and so do some benchmarking tests, which makes for an excellent start to the project.
Sequencing technology produces the DNA sequence information of tumor cells, and cBioPortal is one of the leading web tools for collecting tumor sequencing results and providing visualization and exploration functions. Tumors are not homogeneous: mutations accumulate as cells divide, so different parts of a tumor can be clones of different cells, each at an obvious genetic distance from the others. This project aims to build a front-end module to estimate the clonal evolutionary relationship using multi-region sequencing data. TBD
This project focuses on creating an awe-inspiring game using Pocket Code, with the goal of showing what kind of unimaginable feats can be achieved with Pocket Code. To realize this, the game will be inspired by the recently popular 2D title “The Binding of Isaac” (BoI). While certain characters and parts of the storyline will be the same as in BoI, I am also looking forward to implementing original cutscenes, characters, monsters, weapons, different endings, and a separate set of features diverging from the original game.
Successful completion of this project will inspire other audiences to use Pocket Code to create more apps, further reinforcing the vision “Computational Thinking for All with Free Visual Coding Apps”.
This project proposes to build an infrastructure that helps the R community explore R user groups, R-Ladies groups and past R-GSoC projects, using a data-driven approach to render dashboards that summarize trends and insights to support better decision making for R-centered organizations, the R Foundation, and the general community.
Training high-end deep learning models takes a long time: hours, days, even weeks. For developers and researchers this slows down the creativity and implementation cycle. Transfer learning provides a nice solution: a model trained on a similar dataset can be retrained, partially or fully, on a custom dataset and yield similar results. TFHub is a platform by TensorFlow where pretrained models are shared so that developers and researchers can apply transfer learning to them. In this project, the following pretrained models and their demos will be added to TFHub:
The goal of this project is to implement encoding and decoding of GB2312 for perl6.
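GB2312 is a fixed-width double-byte encoding for simplified Chinese, so a conforming implementation must roundtrip in-repertoire text and reject characters outside the charset. Python's built-in gb2312 codec can serve as a behavioural reference for the Perl 6 module; this sketch shows the expected behaviour, not the Perl 6 API:

```python
# Reference behaviour for a GB2312 codec, using Python's stdlib codec.

text = "汉字"                          # two common Han characters
encoded = text.encode("gb2312")        # each Han character maps to 2 bytes
assert len(encoded) == 4               # fixed two-byte codes for Han chars
assert encoded.decode("gb2312") == text  # lossless roundtrip

# Characters outside the GB2312 repertoire must raise an encoding error:
raised = False
try:
    "€".encode("gb2312")               # the euro sign postdates GB2312-80
except UnicodeEncodeError:
    raised = True
assert raised
```

The same three properties (two-byte codes, lossless roundtrip, strict failure on out-of-repertoire characters) make a natural test suite for the Perl 6 implementation.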
Neovim’s job control feature introduced great multitasking support that added new capabilities and enhanced a lot of important tasks (e.g. online linting, as in ALE), making them asynchronous and non-blocking. This improved the editing experience and reduced latency.
Here I propose to work on adding multiprocessing support to VimL and Lua scripts by allowing functions to be invoked asynchronously in separate processes and enabling the retrieval of their results either through callbacks and/or by waiting for them on demand in the parent process.
This multiprocessing feature will have a noticeably positive impact on the performance of tasks that can do their computations in parallel, or that may want to spawn workers to process data or do some work on the side while performing their main job.
Bayesian Additive Regression Trees (BART) is a Bayesian nonparametric approach to estimating functions using regression trees. A BART model consists of a sum of regression trees with (homoskedastic) normal additive noise. Regression trees are defined by recursively partitioning the input space and defining a local model in each resulting region in order to approximate some unknown function. BART is a useful and flexible model for capturing interactions and non-linearities, and has proved a useful tool for variable selection.
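The sum-of-trees model with homoskedastic normal noise described above can be written compactly (notation follows the standard BART literature) as:

```latex
y_i = \sum_{j=1}^{m} g(x_i;\, T_j, M_j) + \epsilon_i,
\qquad \epsilon_i \sim \mathcal{N}(0, \sigma^2),
```

where $T_j$ is the partition structure of the $j$-th tree, $M_j$ is its set of leaf values, and $g(x_i; T_j, M_j)$ returns the leaf value assigned to $x_i$. Regularizing priors over $(T_j, M_j)$ keep each tree a weak learner.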
This project aims to improve the existing Nextcloud integration within Rocket.Chat. Seamless authentication and improving the UI and UX of the file picker are the main focus.
OpenMRS needs location-based restrictions analogous to privilege-level access restrictions. The Location Based Access Control (LBAC) module should provide a mechanism to restrict a user's access to the login location of the authenticated user. That way, if someone is logged in at a certain location, they should see only those encounters, observations, and patients registered at that location, while the System Developer account should be able to see patients in all locations.
CrowdAlert addresses a genuinely important problem, and its solution boils down to providing a robust, real-time end-user experience.
This proposal is about core optimisations and the addition of exciting new features to the user experience of the React application. That implies: using server-side rendering for faster page load times (significantly lower Time To First Byte); using WebSockets for real-time updates of incidents and comments, and HTTP long polling for upvotes; migrating to a more robust, scalable and available database solution and decoupling it from Django views. Then we need to write consistent tests to validate the different models and fields. Finally, an NSFW image classifier. We will end up with cleaner code and a sound application architecture on both frontend and backend. This will not only make developing new features easier, cleaner and more maintainable, but will also dramatically improve the user experience and take it to the next level.
Firefox Profiler is a tool that visualizes performance data recorded by various performance-analysis tools, helping us gain insight into an app's responsiveness and its JavaScript and layout performance. By using it efficiently we can optimize our app's performance. Currently it supports the Gecko, Chrome, and perf (Linux) profile formats. This project will add support for visualizing profiles from Instruments, a powerful and flexible performance-analysis and testing tool that is part of the Xcode toolset.
GStreamer plugins are written in C, and the developers are attempting to convert them to Rust, which is more robust and easier to maintain. I will take part in this conversion and help fix related issues. This will require a lot of testing and careful reimplementation of the C code in Rust.
A condition list is a list of diagnoses, symptoms, or findings that are being tracked over time (i.e. across encounters).
JuliaText is the JuliaLang organization that provides packages for working with text. It currently lacks support for basic tasks like Named Entity Recognition, Part-of-Speech Tagging, and Dependency Parsing, which serve as the basis for various language-processing problems and for analysing text.
I propose to implement practical models for Named Entity Recognition and Part-of-Speech Tagging in Julia and extensively test and validate them. Robust and well-tested APIs for these two tasks will be written.
Software Heritage is an ambitious research project whose goal is to collect, preserve in the very long term, and share the whole publicly accessible Free/Open Source Software (FOSS) in source code form.
The Software Heritage data model is a big Merkle DAG made of nodes such as revisions, releases, and directories. It is a very big graph, with ~10 B nodes and ~100 B edges, which makes it hard to fit in memory using naive approaches. Graph compression techniques have been successfully used to compress the Web graph (which is slightly larger than the Software Heritage one) and make it fit in memory. The goal of this GSoC is to review existing graph compression techniques and apply the most appropriate one to the Software Heritage case, enabling in-memory processing of its Merkle DAG.
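One core idea behind Web-graph compressors such as WebGraph is gap encoding: a node's sorted successor list is stored as the first identifier plus small deltas, which then compress very well under variable-length codes. A minimal sketch of just that step (illustrative only, not the Software Heritage or WebGraph implementation):

```python
# Gap-encode a sorted adjacency list: large node ids become small deltas.

def gap_encode(successors):
    """Turn an adjacency list into its first value followed by gaps."""
    gaps, prev = [], 0
    for i, s in enumerate(sorted(successors)):
        gaps.append(s if i == 0 else s - prev)
        prev = s
    return gaps

def gap_decode(gaps):
    """Invert gap_encode by accumulating the deltas."""
    out, acc = [], 0
    for i, g in enumerate(gaps):
        acc = g if i == 0 else acc + g
        out.append(acc)
    return out

adj = [1000003, 1000001, 1000042]
assert gap_decode(gap_encode(adj)) == sorted(adj)
assert gap_encode(adj) == [1000001, 2, 39]  # small gaps instead of big ids
```

Real systems add reference compression (copying a similar node's list) and universal codes on the gaps; the evaluation in this project would weigh such techniques against the structure of the Software Heritage DAG.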
A library for creating data structures with the same functionality as DataFrames in R or pandas.
This project aims to introduce new features to DetectionSuite, such as adjustable bounding boxes, the ability to add new labels, drawing bounding boxes around objects of interest to obtain their labels, and a ROS node for the suite. It also aims to improve user interaction by adding a few new GUI features, such as playback options, a class-labels workspace, and an easy-to-start interface.
The current version of the Remoting over Apache Kafka plugin requires users to manually configure the entire system, including Zookeeper, Kafka and the remoting agents. It also doesn't support dynamic agent provisioning, so scalability is harder to achieve. This project aims to solve two problems:
This project focuses on improving Open Robotics' simulators by creating a Gazebo visual plugin and sensor-data visualization. This involves implementing a GUI overlay for Gazebo that displays important information, including the current task, number of points, penalties, and useful debugging information. In addition, it involves writing loadable plugins for sensor-data visualization using Ignition Rendering.
The 2.1 SPDX specification source files exist in Markdown, from which an HTML version is now generated. The goal of this GSoC is to find an approach to generate both HTML and PDF versions of the SPDX Specification from the Markdown (MD) sources, such that changes to the source files in the repository are automatically reflected in both versions.
Sampling algorithms and volume computation for convex polytopes are very useful in many scientific fields and applications. The package volesti is C++ software with an R interface, and the first package to provide such a variety of options in geometric statistics. It currently scales to a few hundred dimensions, and hence can be an essential tool for a large number of scientific applications. However, scaling from a few hundred to a few thousand dimensions has been considered a very far-reaching goal for many years. The goal of this project is to provide the first ever implementations of sampling from convex polytopes and volume computation in a few thousand dimensions. We exploit some very recent theoretical results that guarantee fast convergence and numerical stability in order to propose an efficient implementation of the current state-of-the-art geometric random walk algorithms. The proposed implementations will be a decisive contribution to other scientific fields such as computational geometry, finance and optimization. We give a week-by-week time schedule. We hope this project will help educational programs, research, and even business communities.
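To make "geometric random walk" concrete, here is a toy sketch of one step of hit-and-run, one of the walks volesti implements, restricted to the hypercube [-1, 1]^d (the real code handles general H-polytopes; this is illustrative only): pick a random line through the current point, compute the chord where the line stays inside the body, and jump to a uniform point on that chord.

```python
import random

def hit_and_run_step(x):
    """One hit-and-run step inside the hypercube [-1, 1]^len(x)."""
    d = len(x)
    # Random direction; only the line it spans matters, so no normalization.
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    # Chord [t_min, t_max] such that x + t*u stays inside the cube.
    t_min, t_max = -float("inf"), float("inf")
    for xi, ui in zip(x, u):
        if ui != 0.0:
            a, b = (-1.0 - xi) / ui, (1.0 - xi) / ui
            lo, hi = min(a, b), max(a, b)
            t_min, t_max = max(t_min, lo), min(t_max, hi)
    t = random.uniform(t_min, t_max)  # uniform point on the chord
    return [xi + t * ui for xi, ui in zip(x, u)]

random.seed(0)
x = [0.0] * 5
for _ in range(100):
    x = hit_and_run_step(x)
assert all(-1.0 - 1e-9 <= xi <= 1.0 + 1e-9 for xi in x)  # stays inside
```

The scaling work in this project is about making exactly this kind of step (and walks with better mixing, such as billiard walk) cheap and numerically stable when d is in the thousands and the polytope is given by general inequalities.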
In the current scenario, a user logs in to the desired InterMine instance and saves their results and required data. The problem arises when the same user wants to access a different InterMine: they have to register and log in again on the new mine. Currently the InterMine community does not have a single common sign-in mechanism; it authenticates users with tokens (temporary and permanent) or via Google and Elixir login services. This project will modify the existing token mechanism by making InterMine an OAuth2 provider with a single common authorization server for all 30 mines, so that a user can access all the mines with a single set of credentials, i.e. just one registration.
The Project has 3 sub-parts which are described as:
A common sight on today’s streets is the number of abandoned animals languishing there, suffering from injuries and disease and living without any shelter. The silent suffering of these souls caught the attention of dynamic animal lovers, leading to the Animal Rescue App initiative: a mobile app to track animals that are in need of help. The Animal Rescue App connects animal lovers, vets, and NGOs in real time so they can track the animals that actually need help. The mobile application will be implemented in React Native using components of Go Social and Firebase, and the admin website in React.js and Firebase.
ROOT has several features which interact with libraries and require implicit header inclusion. These headers are often immutable, and reparsing them is redundant. C++ Modules are designed to minimize the reparsing of the same header content by providing an efficient on-disk representation of C++ code. Although C++ Modules support in ROOT has been implemented over the last few years, there is still room for performance improvement, and a GlobalModuleIndex implementation is one possible solution. It is a mechanism that creates a table of symbols and PCM names so that ROOT can load the corresponding library when a symbol lookup fails. It is expected to improve ROOT’s performance by speeding up its startup time.
For the past several years, 360° cameras have become far more accessible, and several video-sharing services have enabled support for 360° videos (for example, YouTube and Facebook). Nevertheless, FFmpeg currently doesn’t support any 360° video format.
The goal of the project is to write a filter supporting, as both input and output, all 360° video formats found on the Internet. The filter should also support all basic 360° transformations: rotations (roll/yaw/pitch), field-of-view extraction, etc.
This Bassa project is about making the installation of Bassa easier for its users and containerizing it in a better way, to use the available resources efficiently.
Bringing Minio into Bassa for proper file storage will be the first task. Later, the currently written Bassa installation scripts will be replaced by proper, efficient Dockerfiles for each Bassa component (Minio storage, MySQL DB, and the Bassa socket server plus aria2c), which will be treated as Docker containers. These containers will run in pods, as described briefly in the proposal, and the pods will run as clusters in a Kubernetes (k8s) environment.
The Bassa socket server and aria2c will together work as a single-instance pod and will be auto-scaled to attain good uptime. The other components, Minio and MySQL, will run in separate pods; these pods will interact with each other and transfer files to the Minio buckets created by the user.
This project will improve and extend Dynamic Learning, a project created by Jithin KS as part of his 2018 Google Summer of Code project. Dynamic Learning is a platform in which teachers and programmers can collaborate with one another to create visualizations of common STEM topics. My project will improve this application by focusing on three major areas of extension: interface changes and responsiveness, integration with other software, and classroom usability improvements. I will also add a few miscellaneous improvements towards the end of the project.
Write a driver for the AD5940 in the IIO subsystem and add support for manipulating the ADC channels, DAC channels, internal temperature sensor and GPIO lines.
Planned features: [OPTIONAL] Scheduler Jobs UI. Continued refactoring and performance improvements. Improve UI and workflows. Implement TOTP based two-factor authentication. Improve and stabilize GIS functionality. Improve KYC features and client onboarding workflow. Implement in-app push notifications via SMS Campaign API (Direct, Scheduled, Triggered). Design and implement UI for Maker-Checker (tasks) list. Improve UI for viewing reports and support exporting to CSV. Improve the existing UI and workflow for survey inputs. Unit and integration testing.
Finding a good example of a full-stack Haskell web development project is hard for a beginner-to-intermediate Haskell developer; this is based on my own experience learning Haskell for web development. Moreover, many misinformed statements about how to do web development in Haskell, especially about which libraries, tooling, and architecture to use, make Haskell newcomers afraid to express their ideas in a web project using the language.
Hackage Matrix Builder is an example of a project that is planned to be a full-stack functional-programming project, which can help beginners see what a web-based project looks like in Haskell. However, it does not use Haskell throughout, which sometimes makes it hard for newcomers, who must learn two languages at the same time from their example project. In addition, Hackage Matrix Builder aims to provide the best build-compatibility testing service to the Haskell community, covering all the packages published on Hackage.
In this project, I aim to turn Hackage Matrix Builder into a full-stack Haskell web development example, as well as to add UI features that maximize its purpose as a QA/CI service for everyone.
Currently the RetroShare Android app is in an early alpha stage, with basic functionality written in QML. Although the core functionality is implemented, its unappealing design makes the app hard to use and discourages potential users from joining the RetroShare network.
The new app is intended to make communication over RetroShare possible on Android. Users will gain additional access to the RetroShare network through their phones, and by focusing only on the core functionality the app will make joining the network simple and enjoyable.
The second part of the work is to extend the chat backend with asynchronous messaging and storage of the message history.
Migration of official TensorFlow models to use/support TF v2.0 features/functions.
Bricks are the fundamental tools needed for the development of projects within the Pocket Code app. Therefore, I would like to contribute by implementing the bricks missing for Catrobat Language Version 0.992, which would add essential features that are missing from the current App Store version. These missing bricks include, for example, the GoToBrick, the WhenBackgroundChangesBrick and the CloneBrick, which are extremely useful, if not essential, and already implemented in the Android version of the app. These improvements would raise the functionality and quality of the iOS app to a new level, ensuring an even better version of the app than ever before.
This task focuses on improving DevOps for the Carbon Footprint web extension, along with adding new features so that new users can benefit from the extension:
Integrating web-extension tests (E2E testing): currently only the core functions are covered by unit tests; the student is expected to use Puppeteer to test the whole extension. This way, the whole extension will be tested before deploying/committing, helping keep the extension stable. The student will write tests for all the websites currently supported by the extension, along with tests for the websites that will be added during the GSoC project.
Extension support for cruises: currently, the extension only supports calculating the carbon emissions of airplanes, trains and cars. The student is expected to integrate cruise calculations into the extension as well, which would help more users calculate their carbon emissions. Calculating the carbon emissions of a cruise is significantly different from doing so for aircraft, cars and trains.
An important characteristic of microbiome data is that the number of microbial taxa is far larger than the small sample size (p >> n), resulting in a high-dimensional problem. In this proposal, we will develop a deep-learning prediction method, the Tree-regularized Convolutional Neural Network (tCNN), for microbiome-based prediction. The advantage of tCNN is that it uses convolutional kernels to capture the signals of microbiome species with a close evolutionary relationship within a local receptive field. Moreover, tCNN uses different convolutional layers to capture different taxonomic ranks (e.g. species, genus, family). Together, the convolutional layers with their built-in kernels capture microbiome signals at different taxonomic levels while encouraging the local smoothing induced by the phylogenetic tree. tCNN will be implemented as a user-friendly R package based on TensorFlow.
Super resolution refers to a family of algorithms that up-sample a lower-quality image into a higher-quality one. The goal is to create an up-sampled copy that is as detailed and visually pleasing as possible. It is used in a wide range of fields, such as medical image processing and surveillance-camera stream processing. Using deep-learning models for super resolution is a widely researched area, as they generally achieve better accuracy than classical computer-vision algorithms. Many types of models are used, ranging from supervised to unsupervised learning methods. I propose to implement two deep-learning-based models as part of the OpenCV library. One is EDSR, a residual-network-based model known for its high accuracy. The other is LapSRN, a fast but still accurate model that can be deployed in real-time applications. By integrating these two, OpenCV would gain a model that achieves state-of-the-art accuracy, and another that can be deployed on devices with lower computational power while still maintaining high accuracy.
An ‘English-Lingala’ language pair using Apertium rule-based machine translation system.
Parameter estimation and uncertainty propagation are salient aspects of applied dynamical systems of practical interest. Both parameter estimation and uncertainty propagation are handled elegantly in the Bayesian framework, and made easy by PyMC3. Current numerical integrators in the scipy ecosystem do not return gradients with respect to the parameters of the system’s solution, preventing PyMC3 from using these integrators in the No U-Turn Sampler. This project seeks to implement methods for computing gradients for use in PyMC3’s MCMC capabilities, thereby allowing applied researchers to analyze their systems in the Bayesian framework through PyMC3.
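The core technique is forward sensitivity analysis: augment the ODE with equations for the derivatives of the state with respect to the parameters, and integrate both together. A hedged, self-contained sketch (plain Euler in pure Python; the real project would use proper integrators and feed the gradients to PyMC3's NUTS, and the function name here is my own): for dy/dt = -k·y, the sensitivity s = ∂y/∂k obeys ds/dt = -y - k·s.

```python
import math

def solve_with_sensitivity(k, y0, t_end, n=20000):
    """Euler-integrate dy/dt = -k*y together with its sensitivity
    s = dy/dk, which satisfies ds/dt = -y - k*s, s(0) = 0."""
    dt = t_end / n
    y, s = y0, 0.0
    for _ in range(n):
        # Update both with the *current* values (forward Euler).
        y, s = y + dt * (-k * y), s + dt * (-y - k * s)
    return y, s

k, y0, t = 0.7, 2.0, 1.5
y, s = solve_with_sensitivity(k, y0, t)
# Analytic check: y = y0*exp(-k*t) and dy/dk = -t*y0*exp(-k*t).
assert abs(y - y0 * math.exp(-k * t)) < 1e-3
assert abs(s - (-t) * y0 * math.exp(-k * t)) < 1e-3
```

With such gradients available, the log-likelihood of an ODE model becomes differentiable in its parameters, which is exactly what gradient-based samplers like NUTS require.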
An engine in C++ to run the Mission Supernova 2 game on ScummVM.
The aim of this project is to extend the current synthesis-state watcher to additional low-level waveforms at the different stages of note production in AddSynth, SubSynth, etc. In addition, an oscilloscope-like view will be implemented in the UI, giving users the opportunity to see the live progression of intermediate waveforms, e.g. after a filter or before mixing with another note. Seeing this will let users understand how different parameters affect the waveform of the sound, resulting in better music!
Laying down a strong foundation and base architecture for an intrusion detection and prevention system (IDS/IPS), intelligent log monitoring, and an antivirus that can be scaled in the future and further expanded easily by applying machine learning. Enhance the current firewall by adding advanced rules to detect malformed and suspicious packets and dump them into a PCAP file for future forensic analysis. Implement OSINT tools to collect information about attackers and generate a CSV report. Introduce an auto server patcher to patch server configurations for maximum security, and implement server-side web-defacement detection. Also, protect IoT devices by checking whether they are under the Shodan radar. Perform all the elemental connections and introduce different modes for the user. Improve the GUI by adding all the configuration options and critical data, such as last login. Perform bug fixes and improve the dashboard. Write detailed documentation and a README, and finally package and ship SecureTea version 1.1 on PyPI.
My goal is to rewrite the current lightmapper used in Godot in order to add some quality improvements and eventually allow it to use the GPU to speed up baking times.
I will explore what ray tracing libraries are openly available and choose the most fitting one. Finally, I will integrate a denoising algorithm to achieve better looking results with a reduced amount of samples.
This project focuses on making the integration of application providers with ownCloud vendor- and platform-independent. This will give users the flexibility to choose applications according to their needs.
This project aims to create a new API to enable multiple network interfaces for Rook storage providers. Currently, Rook providers' only choice is whether or not to use hostNetwork. The new API will be used to define network resources for Rook clusters; Rook operators will be able to consume those definitions and manage them. This enables more fine-grained control over storage providers' network access.
Merge conflicts are part of every version control system. There can be situations in which some changes are necessary for a piece of software to function properly, but an unexpected merge conflict may lag the development workflow. The user may want to store a partially resolved merge state if they have to fix an urgent bug; in the current scenario, they are only allowed to either fully resolve the conflicts or abort the operation that led to them, discarding the partially resolved state. This project is about adding functionality to store an unresolved merge state to help the user on such occasions. This lets the user do the required tasks at the moment in the same repository, and later return to the same merge state and resume resolving conflicts. The project also adds functionality to let someone else do the conflict resolution for the user, by committing the conflicts and sharing them with other users.
Librecores provides a platform to share projects and ideas in the area of free and open source digital hardware design. Librecores CI is an approach/service to provide continuous integration for hardware projects hosted on Librecores, improving user experience and reliability. This project aims to provide an automation service for hardware projects with constantly evolving codebases. Jenkins, the automation server, will be used to achieve the goals of the project.
Some functionality in SunPy or in affiliated packages needs access to data files on remote (HTTP) servers. Examples include data provided by instrument teams relating to the calibration or performance of their instruments; such data are highly likely to change with time.
This project needs to be designed and implemented under the assumptions that SunPy has no control over the data on these servers, and that files on the servers may be replaced with different files with the same name.
SymPy is a Python library for symbolic mathematics. SymPy has a powerful solve function that can solve a lot of equations but, due to its complex API and its inability to give consistent output, solveset was introduced and has been under development since 2014. A lot of work is still needed to complete solveset. For transolve, the Lambert solver needs to be completed, and handling of modular equations will be added. Proper use of the decomposition and rewriting principle also needs to be implemented in order to solve nested trigonometric equations. Enhancing the set infrastructure to give simplified output for trigonometric equations is also needed, along with integrating the helper solvers with solveset.
The primary objective of the project is to generate both HTML and PDF versions of the SPDX Specification from Markdown. The HTML and PDF versions will be generated for each draft and release version of the specification. A tool that creates HTML and PDF versions of the SPDX specifications would help in their easy circulation; this conversion is important for the specifications' resource-sharing capability.
After a stable IKEv2/IPsec connection is established, it can still fail due to poor network connectivity, the user going into standby mode, and so on. Consider a VPN gateway with a large number of connections to remote clients: recovery after a network loss imposes a large computational load, since all client connections must be re-established by performing the full SA negotiation from phase 1 of IKEv2. To avoid this overhead, an effective session resumption utility is required. This proposal focuses on implementing and adding this functionality to the existing IKEv2 implementation in the Libreswan project.
Apache Camel focuses on making integration easier by providing implementations of Enterprise Integration Patterns (EIPs), great API connectivity, and an easy-to-use Domain Specific Language to combine and transfer EIPs. The GraphQL component in Apache Camel will act as a query language as well as middleware between clients and the server.
Currently, ChainKeeper is a web application that can be used to retrieve blockchain data for any purpose, and it has a built-in API for this. But, as I understand it and as defined in the project description, ChainKeeper cannot be used as an analysis tool because of its constraints: retrieving data block by block or transaction by transaction is not a good fit for analysis scenarios. There should be a more efficient and optimized way to do this. This proposal addresses the above-mentioned problem with an optimized blockchain analysis library.
This project aims at integrating a JavaScript engine into VLC to enable extensions development and scripting support inside VLC in JS. This project should replace the already existing Lua engine.
Getaviz added A-Frame as a visualization framework last year. However, the performance of these visualizations is still unsatisfactory. The problem is most likely that the generated visualizations need too many draw calls. The goal of this topic is to improve the performance of these visualizations. BufferGeometry and instancing are two important techniques that can reduce draw calls and improve performance by eliminating unnecessary calls to the GPU and saving on rendering.
Commons RNG provides Java-only implementations of various standard pseudo-random number generators, i.e. generators that produce deterministic sequences of bytes, currently in chunks of 32 or 64 bits, with the focus placed on fast generators with strong uniformity. This project involves the creation and debugging of several pseudo-random number generators, including:
Further details can be found in the JIRA issues linked below: https://issues.apache.org/jira/projects/RNG/issues/RNG-32?filter=allopenissues
Many scientists around the world perform structural network analyses on MRI data during neuroscience experiments and studies. scona was designed to make this experience better and to help researchers run brain network analyses quickly and reliably in order to capture many aspects of brain structure and function. It is already a well-documented, tested Python package that is easy to use and reuse.
My project mainly focuses on the visualisation component that will be included in the scona package.
The project aims to build a machine translation model that can translate Sumerian (a language used around 2000 BC) to English using neural networks. The model should be bidirectional, i.e. it should translate Sumerian to English as well as English to Sumerian. Neural Machine Translation (NMT) is a new and highly active approach that has shown promising results on machine translation tasks. I would like to use a basic encoder-decoder architecture, with both encoder and decoder implemented using RNNs (specifically LSTM and GRU units). The input sentence, encoded using previously learned word embeddings, is fed to the encoder, which generates a fixed-length context vector; the decoder takes that vector and generates the target-language translation. Since this performs poorly on longer sentences, we will improve it with an attention-based encoder-decoder model. For complete details, please have a look at the attached PDF or the Google Doc draft.
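The attention step mentioned above can be sketched in plain NumPy; the shapes and the dot-product scoring are illustrative simplifications of what an LSTM/GRU-based model would actually learn:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_context(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the
    current decoder state, normalize, and build a weighted context."""
    scores = encoder_states @ decoder_state     # one score per time step
    weights = softmax(scores)                   # attention distribution
    context = weights @ encoder_states          # weighted sum of states
    return context, weights

# Toy example: 5 encoder time steps, hidden size 3.
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 3))
context, weights = attention_context(rng.normal(size=3), enc)
```

Instead of compressing the whole sentence into one fixed vector, the decoder recomputes such a context at every output step, which is what helps with long sequences.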
Model Visualization Plugin for App4MC - Eclipse IDE
The world we live in is evolving faster than anyone could expect. New technologies and systems assist us in everyday life, from the invention of electric power, through smartphones, up to self-driving cars. Each was a big step in human evolution and gives us a small insight into what is to come. In this future, technical requirements are going to be higher than ever, which will make multi-core real-time systems more and more important.
The goal of this project is to enhance important aspects of the App4MC platform with a user-friendly and customizable representation of model data, in the form of a plugin for the Eclipse IDE.
OCR is a widely used application that translates characters in an image into an editable format. OCR on television news shows would recognize any text appearing on screen and translate it into an editable format at one-second intervals. OCR development poses several challenges. One is detecting duplication when text repeats in successive frames; the application will handle this case. Another is deciding whether to insert a space between words (space detection), as the output may otherwise be all words concatenated together, and spaces are sometimes not detectable in frames. My work will compare a CNN+BLSTM model against open-source libraries, and the most accurate approach will be deployed. The project will be deployed on HPC machines using Singularity containers. A further contribution will be an enhancement to the ASR (automatic speech recognition) system built last year by Ahmed Ismail: it will be trained on a new dataset, and modifications to the architecture may be made (if required) to increase its accuracy.
The current representation of a reference genome is a sequence of nucleotides akin to a long string. Intuitively, this doesn't represent a genome but rather a consensus. A workaround used today is holding variation data in VCF files, which don't update the reference, meaning the reference will always represent the genome as it was, not as it is or as it's evolving. It's clear that the current method of representing genomes is not ideal: there is a need for a data structure that represents the reference together with its inherent variation. Different methods have been tried, and the variation graph is a promising one. The graph works by representing variation within the genome as alternative paths one can traverse, and conserved regions as nodes with a single path to the next node; we then index the nodes for querying and alignment. Moreover, variation graphs hold an advantage with rapidly evolving genomes and with short-read data that would otherwise be thrown out when it has no place to align to in the linear reference; with the variation graph, short reads could align to alternative nodes.
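A toy version of the idea, with a SNP encoded as two alternative nodes between conserved regions (the node labels and the dict layout are purely illustrative, not the encoding any real tool uses):

```python
# Nodes hold sequence fragments; edges encode which fragment can follow which.
# Nodes 2 and 3 are the two alleles of a SNP site between conserved regions.
graph = {
    "nodes": {1: "ACGT", 2: "A", 3: "G", 4: "TTC"},
    "edges": {1: [2, 3], 2: [4], 3: [4], 4: []},
}

def paths(graph, start, end, prefix=""):
    """Enumerate all sequences spelled by paths from start to end."""
    seq = prefix + graph["nodes"][start]
    if start == end:
        return [seq]
    out = []
    for nxt in graph["edges"][start]:
        out.extend(paths(graph, nxt, end, seq))
    return out

# Both alleles are representable: ACGT+A+TTC and ACGT+G+TTC.
```

A read carrying either allele has a node to align to, which is exactly what a single linear reference cannot offer.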
GenPipes is a set of software pipelines designed for genomic analysis. There are currently seven different pipelines, and three more are in development. These pipelines consist of many steps (10-40), so keeping track of which step was executed, what functionality it provided, when it was executed, and whether it failed or succeeded can be a cumbersome task. The project consists of an integrated system that automatically builds a flowchart of the steps executed by the user. This would be a user-friendly add-on to the software, as having a flowchart of the executed steps would help in analyzing and understanding the process performed.
A changepoint is typically defined as a point in time where the distribution of a data stream changes in a distinct manner; for example, one may look for changepoints in mean and/or variance. Usually this is performed in an unsupervised setting where we have no labelled examples of true changepoints. In practice, however, we often have examples of periods of time where we know no changes should be present, or conversely where changes are expected to exist. Where such information is available, we can potentially use it to guide how we set complexity penalties in the changepoint estimation task, and thus decide on an appropriate number of changepoints, a task which currently requires time-consuming parameter tuning by domain experts.
In this particular project, we seek a simple visualization interface to support (i) human labelling of the data with the aid of several complementary measures on the data stream, such as mean, trend, min, max, and variance, and (ii) interactive exploration of the result space suggested by a changepoint detection algorithm. Ultimately, any such visualization will help communicate the data and the respective decisions.
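To make the penalty question concrete, here is a minimal single-changepoint search for a shift in mean; a labelled "no change" window could then be used to set the penalty just above the best cost improvement observed inside it, ruling out such false positives (the function name and cost are illustrative, not taken from any particular package):

```python
import numpy as np

def best_split(x):
    """Find the single changepoint in mean minimizing total squared error."""
    best_t, best_cost = None, np.inf
    for t in range(1, len(x)):
        cost = (((x[:t] - x[:t].mean()) ** 2).sum()
                + ((x[t:] - x[t:].mean()) ** 2).sum())
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# A clean mean shift at index 50 is recovered exactly.
x = np.concatenate([np.zeros(50), np.full(50, 3.0)])
t, cost = best_split(x)
```

Accepting a split only when the cost improvement exceeds a penalty is what turns this search into a model-selection problem, and labelled windows give data-driven evidence for choosing that penalty.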
The aim of this infrastructure project is to migrate all the third-party libraries that Oppia uses to their newer versions with no regressions. This will help Oppia solve some issues in the issue tracker, and upgrading a few of the libraries will provide significant improvements in speed and user experience.
The VLC interface is quite outdated on Linux and Windows. It has a lot of features, but some are not properly exposed.
This summer's project is to heavily rework this interface to make it beautiful and useful again.
This project aims at improving LibreOffice Online's Android implementation by adding new features to the document viewer, fixing the most annoying bugs, and enhancing the overall UI/UX in terms of design and performance.
Though the basic functionality of Bassa is complete and working, AngularJS is no longer well supported, so the codebase needs to be moved to a stable release. ReactJS with Webpack will be used to create the new codebase.
Improving VisualStates, a great tool for easily creating complex robot behaviour, by adding parameterisation to its automata and an online automata library.
TensorMap will be a web application that allows users to create machine learning and deep learning algorithms visually. TensorMap will support reverse engineering of the visual layout into a TensorFlow implementation.
This proposal aims at improving the efficiency of the expression templates used by boost::numeric::ublas::tensor. Traditional expression templates (ETs) are efficient, but Klaus Iglberger showed in his work that traditional ETs do not automatically result in faster execution of the expression. Smart expression templates are a tool for capturing an expression, possibly transforming it, and evaluating it. For better expression evaluation I will use Boost.YAP, a C++14-and-later expression template library. Boost.YAP already offers many functions and algorithms for dealing with expressions, which will ease my task; moreover, most of the issues pointed out in the paper cited above are handled efficiently by YAP. Hence, this proposal aims at integrating Boost.YAP for convenient expression template transformation and evaluation in boost::numeric::ublas::tensor.
Work on solving the issues and adding new functionalities
There is not yet any way in R to fully leverage the power of stochastic gradient algorithms for fitting generalized linear models (GLMs). The overarching goal of this project is to develop the R package sgdnet into a mature implementation of the efficient SAGA algorithm for elastic net-penalized GLMs, targeting the big-data setting where observations greatly outnumber variables. It will result in a CRAN submission.
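The SAGA update at the heart of the proposal keeps one stored gradient per observation and corrects each stochastic step with the table average. A minimal NumPy sketch on unpenalized least squares, a special case of the GLMs sgdnet targets (the real package works in R/C++ and adds the elastic-net proximal step):

```python
import numpy as np

def saga_least_squares(X, y, lr=0.05, epochs=200, seed=0):
    """SAGA on 0.5 * sum_i (x_i . w - y_i)^2 / per-sample losses."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    table = np.zeros((n, p))          # memorized per-sample gradients
    avg = table.mean(axis=0)
    for _ in range(epochs * n):
        i = rng.integers(n)
        g = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5*(x_i.w - y_i)^2
        w -= lr * (g - table[i] + avg)
        avg += (g - table[i]) / n     # keep the running mean in sync
        table[i] = g
    return w

# Recover w* = [2, -1] from noise-free data.
X = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.]])
y = X @ np.array([2., -1.])
w = saga_least_squares(X, y)
```

Unlike plain SGD, the variance of the corrected step vanishes at the optimum, which is what permits a constant step size and linear convergence.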
The goal of my project is to improve the existing Sugarizer-Server Dashboard. My proposed enhancements include:
Currently, LabPlot has a basic set of measures for statistical analysis (regressions, histogram plots, location measures, dispersion measures, and shape measures). The idea is to add more statistically relevant features and also to generate a report (using the existing Worksheet widget) which shows all statistical analyses in one place (similar to what is done in JASP), so that statisticians don't need to add graphs and statistics measures manually and can easily analyse their data.
EOS delivers a set of icons that follow Material Design icons one-to-one. They are currently published at https://suse-uiux.gitlab.io/eos-icons/. The problem with this page is that it lacks engagement and information about the project: how to contribute new icons, how to submit icon requests, how to install it, etc. On the other hand, EOS has a landing page that is more engaging, but a lot of useful information is missing there too. The EOS landing page could benefit from a more automated way to scale sub-pages and content with a CMS. This project therefore requires a web interface for EOS icons and the EOS Design System, where both are aligned in terms of UX/UI and ideally can be managed with a headless CMS such as Strapi (the currently used and preferred CMS at EOS).
VMAF is a new full-reference perceptual video quality metric developed by Netflix which has a high correlation with subjective quality scores. The metric is widely used in industry, and I want to add a VMAF plugin to GStreamer. I also have some additional interesting ideas related to accelerating and tuning VMAF.
QuTiP is best known for solving open quantum system dynamics. At the same time, it also has a Quantum Information Processing (QIP) submodule representing ideal quantum circuits. A tempting idea for combining them is to introduce random noise into the circuit by linking the circuit back to the Hamiltonian driving the evolution of the qubits. This could then be used to study the noise occurring in experiments and how it influences the results.
Lua, as a language, is designed to be a flexible scripting language that aids larger programs with small tools it can complete faster and more easily than languages like C++ and Java. But Lua lacks many of the core functions a scripting language is expected to have (e.g. file manipulation). Lua's lightweight nature is its strength, but programmers should be given a wide variety of options to enhance Lua. Apolo is a library that adds these core functions for projects that need them.
This project aims to get the GraphBLAS API working from Julia by connecting it to the SuiteSparse GraphBLAS implementation and creating a graph type backed by the methods provided in the library.
This project aims to develop a new, fully functional language server for the D programming language using the dmd library.
An app developed on Rocket.Chat App engine to integrate Google Calendar with Rocket.Chat. This can be used to create and view private and public calendar events.
The aim of this project is to bring SpaCy and Gensim functionality to Pharo. Along with that, I plan on integrating all the existing functioning NLP packages into a united library with a uniform API and good documentation.
Carrier synchronization in standard, mass-market GNSS receivers typically utilizes well-understood locked-loop architectures. The performance obtained with such architectures is sufficient for benign propagation scenarios but typically poor under harsh propagation conditions. Code and carrier tracking, as well as joint code/carrier synchronization, can be formulated as estimation problems that can be solved using Bayesian filtering methods. It has been shown in the GNSS literature that KF-based synchronization solutions can overcome the performance limitations of standard approaches, offering implicitly adaptive filter bandwidth and opening up the possibility of using nonlinear models to avoid certain limitations associated with code, phase, or frequency discriminators. In this contribution, we will leverage powerful nonlinear tracking algorithms, including cubature, unscented, and sigma-point Kalman filters, to produce carrier and joint code-carrier tracking blocks that promise to be more effective and adaptable in challenging GNSS environments than traditional architectures.
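As a baseline for the Bayesian filters mentioned above, the scalar Kalman filter below shows the predict/update structure that the cubature and unscented variants generalize to nonlinear models; the constant-state model and the q, r values are illustrative choices, not a GNSS tracking loop:

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=0.1):
    """Minimal scalar Kalman filter tracking a slowly varying quantity;
    q and r are the process and measurement noise variances."""
    x, p = 0.0, 1.0          # state estimate and its variance
    out = []
    for z in zs:
        p += q               # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain (implicitly adaptive bandwidth)
        x += k * (z - x)     # update with the innovation
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# Constant noise-free signal: the estimate converges to the true value.
est = kalman_1d(np.full(200, 1.0))
```

The gain k shrinking as the variance p settles is the mechanism behind the "implicitly adaptive filter bandwidth" claimed for KF-based tracking.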
I would like to work on issue #70, “OncoKB Analysis in Study View”. An inconvenience for both biologists trying to extract valuable information to make informed prognosis decisions and patients who would like ready access to this information for their own comfort is the time needed to access this data. As the genomic data sets for each study vary in size, viewing annotations in real time for studies with large data sets is a challenge because the information takes a while to load; for example, MSK-IMPACT 2017 has a data set of 10,000 samples, so its study view takes a few minutes to load. I would like to work on the three backend tasks and the first frontend task. This means pre-annotating the mutation data, storing it in the MySQL database, and creating an additional column in the mutation table called “annotation”, then querying and using the web API to provide the information needed for the front-end goal of creating pie charts for oncogenicity and the highest level of sensitive therapeutic implications per sample.
The Bespoke Silicon Group at the University of Washington is working on the second version of their open-source RISC-V manycore processor. They are also working on a CUDA-light programming environment using LLVM. This GSoC, I plan to extend the RV32 LLVM backend to incorporate new hardware features.
When working with a dataset containing geolocations, we need a tool to display it. The idea is to build this tool. Users will be able to upload KML or JSON data, select the data type and which field holds the geolocation, and choose a legend or build new ones.
We plan to incorporate two state-of-the-art elements into bdvis: interactive plotting and dashboards. We plan to develop and test an interface that enables interactive graphics with ‘drill-down’ capabilities.
Diagnostic visualization can unveil hidden patterns and anomalies in the data and allow quick exploration of massive datasets. Developing novel interactive visualizations, coupled with a modular dashboard system for biodiversity data that can easily be employed by R experts and novices alike, will undoubtedly promote biodiversity research.
This project would add a feature to the rubygems.org website giving each gem page an option to show a tree/DAG of all transitive dependencies needed by that gem. In one of the issues that led to this project, a user describes how some users decide whether to download a gem based on its transitive dependencies, and how they need to keep a mental map of those dependencies to conclude whether or not to use the gem. This project will help such users get a better idea of whether a gem is for them.
The goal is to work on a GUI where the user can type or paste ASCII art for a Purr Data diagram and have it converted to a floating selection in the current diagram. The ability to type ASCII art into an object box and have it converted into a Pd diagram would let users paste ASCII art from the mailing list directly into the interface and make it easier to create Purr Data diagrams using only the keyboard. The goal also includes the reverse direction: converting a subset of Purr Data diagrams into ASCII art, with the convertible subset chosen from the Pd diagrams that can be parsed out of the text archives of the Pd mailing list.
Eclipse CBI is currently in the process of migrating all the Jenkins instances it manages to a Kubernetes (OpenShift) cluster. The goal of this topic is to implement a dashboard listing all Jenkins instances running in the cluster, with live data from the cluster such as pod status, resource usage, and Jenkins queue size.
The Genomic Data Commons (GDC) Portal serves as a large-scale genomic data repository hosting data from NCI cancer genome projects in standardized formats amenable to programmatic access. As one of the major goals of cBioPortal is to serve cancer genomic data from a wide range of sources in an easy-to-analyze manner, the creation of an Extraction Translation (ET) pipeline between the two platforms would greatly benefit the cancer genomic research community as a whole.
A previous GSoC project laid the foundation for this pipeline, developing a Spring Batch pipeline that takes a GDC manifest file specifying the desired data and creates a clinical file and a Genome Nexus-annotated MAF suitable for cBioPortal import. This project will expand the pipeline, creating new Spring Batch readers/processors/writers and corresponding pipeline logic for CNA and mRNA expression data, two data types that are similar in format and useful in conjunction for analyzing over- and under-expressed genes in a given sample.
Large datasets are crucial for training well-performing models, which have millions of parameters to train. My proposal is to add a data augmentation module to OpenCV, similar to those in other libraries such as Keras.
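A minimal sketch of what such a module provides: random, label-preserving transforms applied per sample to multiply the effective dataset size. The transforms here use NumPy only; an OpenCV version would use cv2 primitives such as warpAffine instead:

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and shift an image; a toy stand-in for the kinds
    of transforms (flip, crop, rotation) an augmentation module offers."""
    out = img
    if rng.random() < 0.5:
        out = out[:, ::-1]               # horizontal flip
    shift = int(rng.integers(-2, 3))
    out = np.roll(out, shift, axis=0)    # small vertical translation
    return out

rng = np.random.default_rng(42)
img = np.arange(64, dtype=float).reshape(8, 8)
batch = np.stack([augment(img, rng) for _ in range(16)])
```

Each augmented copy keeps the original pixel values (only their positions change), so the label associated with the image remains valid.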
This project involves building a webapp to replace the current user onboarding flow for centos.ci.org
There is existing work dedicated to optical character recognition (OCR) for images, but there has not been a robust pipeline generating OCR for video recordings, not only for English but also for other languages. Combining state-of-the-art detection techniques and OCR tools, I propose a pipeline that can detect and recognize text appearing in international television news using Mask R-CNN and ShueNet, with the aid of ASR.
Integration of ActionCable to implement WebSockets for various kinds of real-time communication in the plots2 project, such as notifications and comments.
The Sensor Data and Upload Library project aims to build a reusable package that can be used independently to convert CSV data into graphs or charts and analyse it.
I propose to improve the installation experience of Amahi by completely removing the INSTALL CODE mechanism and allowing a basic Amahi HDA to be set up without an API key, and then to work on home automation features and integrate them well with Amahi. The problem is that right now Amahi can't be used without registration, so people who want to try Amahi without registering can't do so, and we may be losing users who dislike that hassle. I will change the code of hda-ctl and hda-install so that a basic Amahi HDA can be set up, then change the Anaconda module accordingly. After that I will create a package for home automation and integrate it to some extent with Amahi. I have worked with Amahi before, especially on the installation and setup side, so I know in depth how Amahi gets set up and runs in the background with hda-ctl, and I know the hda-ctl codebase well. I also did extensive app testing in my previous GSoC, so I have some knowledge of Amahi apps.
Reinforcement learning is a class of machine learning methods that use the reward signal of the environment to infer optimal actions. Thus, easy environments for reinforcement learning agents are those with dense reward signals, and in such environments most algorithms included in TF-Agents already achieve good results. However, in sparse-reward environments such as Montezuma's Revenge for the Atari 2600, these algorithms fail to discover optimal behavior because they rarely find any nonzero reward. To mitigate this problem, multiple methods have been proposed that add intrinsic rewards, or "curiosity", to existing algorithms to extend their capability.
This project will implement three curiosity modules in TF-Agents: pseudocount-based exploration via Context Tree Switching (CTS), the Intrinsic Curiosity Module (ICM), and Random Network Distillation (RND). They will be implemented as self-contained components so that they can be enabled or disabled in one line for existing reinforcement learning algorithms in TF-Agents. With these modules, users will be able to obtain meaningful results on sparse-reward environments without careful reward shaping.
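Of the three, RND has the simplest core: a frozen, randomly initialized network and a trainable predictor, with the predictor's error on an observation serving as the intrinsic reward. A single-linear-layer NumPy caricature of that mechanism (real RND uses deep networks, reward normalization, and a TF training loop):

```python
import numpy as np

rng = np.random.default_rng(0)
W_target = rng.normal(size=(4, 8))   # frozen random target network
W_pred = np.zeros((4, 8))            # trainable predictor

def intrinsic_reward(obs):
    """Prediction error against the frozen target: large for novel
    observations, shrinking as they become familiar."""
    err = obs @ W_pred - obs @ W_target
    return float((err ** 2).mean())

def train_predictor(obs, lr=0.1):
    """One gradient step on the predictor's squared error."""
    global W_pred
    err = obs @ W_pred - obs @ W_target
    W_pred -= lr * np.outer(obs, err)

obs = rng.normal(size=4)
obs /= np.linalg.norm(obs)           # unit norm keeps the step stable
before = intrinsic_reward(obs)
for _ in range(50):
    train_predictor(obs)
after = intrinsic_reward(obs)        # familiarity lowers the bonus
```

This decay of the bonus on repeated observations is exactly the exploration signal: states the agent keeps revisiting stop paying intrinsic reward.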
This project includes adding BLAS, Sparse linear algebra and iterative solvers into Chapel.
Eclipse SWTChart allows users to create different types of charts. The API is well designed and makes it easy to create line, bar, and scatter charts. In addition, charts can be created even more easily with the SWTChart extensions, which use the convention-over-configuration pattern and offer many additional improvements, such as automatically scaling axes of different types or selecting specific data ranges.
This project aims to extend the export options already available in the SWTChart extensions menu. The currently available export options are PNG, JPG, BMP, CSV, LaTeX, and R script.
The new export options to be added are PDF (Portable Document Format), SVG (Scalable Vector Graphics), and EPS (Encapsulated PostScript).
Communication using body language is an ancient art form, currently evolving in many fascinating ways. Automatic detection of human body language is becoming an active subject of research due to its application in vision-based articulated body pose estimation systems such as markerless motion capture for human-computer interfaces, robot control, visual surveillance, and human image synthesis. A specific part of this field, gesture recognition, has gained great attention in recent years; current focuses include emotion recognition from the face and hand gesture recognition.
This project proposes an automated system for hand gesture detection and recognition in TV news videos. Given a news video of a certain duration, the system will not only detect the hand gestures in the video but also provide a label from among a set of hand-gesture classes. Finally, the system will be packaged in a Singularity container for deployment on high-performance computing (HPC) clusters.
scipy.fftpack provides several variants of the fast Fourier transform for use in numerical and scientific computing applications. Currently this is a Python wrapper around its namesake, the Fortran FFTPACK library. However, a number of concerns exist about the precision and performance of the current FFT implementations. Thus, it is desirable to allow third-party libraries to be used instead of FFTPACK, allowing for improved performance and accuracy.
This project will first design and implement a backend interface that allows different libraries to be called underneath the scipy.fftpack interface. Then a selection of third-party FFT libraries can be adapted to implement this interface, providing users with a range of backends to choose from. These backends may be selected at runtime to accelerate existing users of the scipy.fftpack interface without any changes to their code.
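The dispatch idea can be sketched as a small registry consulted at call time. The names here (register_backend, set_backend, and the naive DFT used as a stand-in default) are illustrative only, not the final SciPy API:

```python
import cmath

_backends = {}
_active = "default"

def register_backend(name, fft_func):
    _backends[name] = fft_func

def set_backend(name):
    """Select which registered implementation the public fft() uses."""
    global _active
    if name not in _backends:
        raise ValueError(f"unknown backend {name!r}")
    _active = name

def fft(x):
    """Public entry point; dispatches to whichever backend is active."""
    return _backends[_active](x)

def naive_dft(x):
    """O(n^2) DFT straight from the definition, as a toy default backend."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

register_backend("default", naive_dft)
```

Callers keep calling `fft(x)` unchanged; swapping in a faster third-party implementation is a single `set_backend(...)` call, which is the property the project wants for existing scipy.fftpack users.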
Graal produces analyses related to code complexity, quality, dependencies, vulnerability, and licensing, and the data produced conforms to the formats that can be processed by GrimoireLab. I will mainly be focusing on:
Of the five backends provided by Graal, CoCom (Code Complexity) covers a vast majority of the popular languages, and CoLic (Code License), backed by Nomos and ScanCode, fetches license and copyright information from software development repositories and is language independent. Metrics related to these two backends added during the GSoC period could be applied to a wide range of projects in the future.
I will be working on the following features, plus some additional features mentioned in my proposal based on community remarks:
1) Add multiple-account login and maintain a session per account.
2) Add a passcode feature to avoid logging in every time.
3) Add a sync adapter to sync clients and the client data assigned to them.
4) Add Kotlin support to the app, initially converting the Retrofit models to Kotlin.
5) Add a review screen to every new record-creation form.
6) Add a feature to edit a loan application if the loan is not yet approved.
7) Add new data views on various pages (e.g. the customer detail page) according to the API, as with the loan and deposit account detail pages.
This project aims to create a graphical user interface (GUI) for visualizing big gridded geospatial data in the browser, backed by the full power of the Python ecosystem. The GUI would allow controlled data-point selection, massive rendering, data display, custom interaction, selection of fields for plotting, and layout of widgets in the browser using the Intake, Xarray, and PyViz collections of tools. Currently, the majority of geospatial data exploration happens in stand-alone applications like Panoply and NcView, tools that have limited functionality, do not provide complex analysis methods, and can only reasonably be extended by the software developers on those projects. This new tool, written in Python but presented as a dashboard in the notebook environment, will be extendable directly by researchers by using it in conjunction with tools like Dask, and will also provide complex analysis methods on the data being visualized. It holds the promise of saving Earth science and other researchers significant amounts of time, since they can focus directly on visual data analysis and research rather than writing custom code to explore data.
In this project I am going to create a new language pair, Uzbek-Karakalpak, because there is no other translator between these two languages despite their similarities. This project would also open new ways for the Karakalpak language to connect with other languages. In addition, I have built a Russian->Karakalpak and Karakalpak->English dictionary app (https://play.google.com/store/apps/details?id=com.shagalalab.sozlik), so I have access to the biggest Karakalpak dictionary, which I am going to use here. I therefore believe I can readily build a transducer for the Karakalpak language.
Using Django to Create a GUI for Input Error Visualization and Correction
In recent years, ns-3 has been widely used for the simulation of wireless networks because it features several built-in and external modules implementing different wireless technologies. The overall performance of such networks is strongly influenced by the characteristics of signal propagation through the wireless link; therefore, proper modeling of the channel behavior is of primary importance to obtain reliable results from simulations. This project aims to tackle this issue by proposing an extension of the spectrum module to model both frequency- and spatially-dependent phenomena and to account for the directional behavior of signal propagation. This will be achieved by implementing the modeling framework described in 3GPP TR 38.901, which includes the statistical characterization of different propagation environments, supports the modeling of multi-antenna systems, and, thanks to its modularity, can be easily extended with new environments or other additional features. Although it was specifically designed for the simulation of cellular networks, it supports frequency bands between 0.5 and 100 GHz and can thus be used for other wireless technologies as well.
ipptool is used for the development and debugging of IPP-related software and for PWG self-certification of IPP Everywhere printers for driverless printing. I aim to develop additional ipptool test scripts for all the new operations, objects, and attributes defined in IPP System Service v1.0.
Audio Worklet, an extension to the Web Audio API currently available in Google Chrome, allows developers to write their own AudioWorkletNodes that will run audio processing code on a separate thread. This replaces the deprecated, less efficient ScriptProcessorNode, which runs on the main UI thread.
For this project, I will replace all instances of ScriptProcessorNode in p5.Sound with AudioWorklet, with a polyfill for browsers that don’t yet support AudioWorklet. This will improve the efficiency of the p5.SoundRecorder and p5.Amplitude nodes, and possibly also p5.SoundFile.
This project revolves around making the xapian-letor and xapian-evaluation modules releasable. It includes writing extensive tests for various high-level APIs and low-level pieces, adding new rankers and scorer metrics, and adding binding support for the letor module in various languages.
GNSS Reflectometry (GNSS-R) is the application of GNSS signals to determine geophysical parameters of the Earth’s surface, as well as the atmospheric layer. The idea is to jointly exploit the direct and reflected GNSS signals. These reflected signals are particularly interesting on water or ice surfaces. The receiver can be located at all altitudes: from the ground to low Earth orbit (LEO) satellites. Two major types of measurements can be made: i) relative power measurements between direct and reflected signals from which we can derive, for instance, the surface roughness, and by extension the surface wind (this is the mission CYGNSS), and ii) measurements of relative delay between direct and reflected signals.
Rubyplot currently supports only the GR back-end; my project is to extend this support to the Magick back-end, add plotting functions to Rubyplot, and integrate it with iRuby notebooks. This project will greatly enhance the plotting interface of Rubyplot, pave the way for much greater expansion, and allow users to test and debug their code easily.
In this project, I propose to add Recurrent Neural Networks to ChainerX. RNNs are an integral part of deep learning research. ChainerX is faster than other frameworks (e.g. Chainer, PyTorch) since its core is implemented entirely in C++, but it currently lacks implementations of RNNs. In this project, I will provide C++ implementations of RNN models (LSTM, GRU), bringing the speed of ChainerX to RNN workloads. I aim to replicate all RNN-related Chainer ops in ChainerX, and I will also provide sufficient code examples on how to use the RNN models of ChainerX.
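For reference, a minimal NumPy sketch of the LSTM step such an op computes (the gate order and weight layout here are illustrative; ChainerX's actual C++ kernels differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    x: input (D,); h, c: hidden and cell state (H,);
    W: (4H, D), U: (4H, H), b: (4H,) with gates stacked as [i, f, g, o].
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])         # input gate
    f = sigmoid(z[H:2 * H])    # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell update
    o = sigmoid(z[3 * H:])     # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

The C++ implementation would follow the same recurrence, fused for speed.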
In this project, my work will mainly focus on working as a full-stack developer for Amahi. I will implement new features and functionality for Amahi 12. I will improve the front end of the platform using the latest JavaScript and jQuery functions, develop new plugins, and implement new features. Last summer, I implemented most parts of the friending plugin; I will implement the remaining parts, test it, make it production-ready, and release it with the latest version of Amahi. I would also like to work on Docker-related tasks: the new Docker version has many great features that seem very useful for the current docker-compose implementation. I will also implement new plugins for Amahi 12.
This project will aim to create a pipeline for all CPTAC data transformation into cBioPortal compatible files for public access. Conversion and addition of novel CPTAC proteogenomic datasets into the cBioPortal will advance cancer studies, which will lead to a better understanding of the molecular profiles of cancer.
GraphicsFuzz is a tool that helps graphics driver makers capture defects by fuzzing and rendering semantically equivalent shaders. The tool applies the metamorphic testing technique that transforms shaders using the available functions in OpenGL shading language (GLSL). Currently, GLSL built-in functions are not fully supported by the tool. Providing a new set of GLSL features to GraphicsFuzz would create new ideas of transformation, and help the tool to detect a wider range of bugs.
This project aims to achieve a better ecosystem for Images.jl, an image-processing toolbox in Julia. Main contributions consist of user-friendly documentation on Images.jl ecosystem, developer manual, and more consistent, robust, and extensible APIs. Moreover, this project also serves as a subproject to bring Images.jl from pre-Julia-1.0 stage to post-Julia-1.0 stage, and eventually to Images.jl v1.0 milestone.
To develop an official Helm chart for XWiki so that it can be seamlessly deployed to Kubernetes. The chart should be configurable and highly available.
coala currently has a few generic bears which are not well maintained. This project revolves around the objective of getting most generic bears into a working state so that they can be used in production.
Apart from the current bears, this project brings many new generic bears which have been requested by the community for a long time but were never worked on. Such bears include OutdatedDependencyBear, FileModeBear, FileExistsBear, RequirementsCheckBear, etc. Apart from these, there are several improvements to PEP8Bear and PycodestyleBear. All of the bears are well tested and documented. This project also brings a few improvements to coala core and upstream repositories.
The Amahi-Anywhere file server is complete and working as of now, but certain improvements can be made to both the code base and the user experience. This proposal focuses on the implementation of those improvements and optimizations. Improvements to the logging system will help developers gain better knowledge of the events happening in the system. The addition of a good caching mechanism will greatly reduce latency and thus enhance the user experience.
This project also focuses on the implementation of some features which in turn will make the management of the Amahi app easier with the help of special dashboards where users can easily monitor the system and take action accordingly.
Commission and curate new examples for http://p5js.org/ on the occasion of the upcoming 1.0 release to showcase the power of community and collaboration in open source and creativity on the web. Examples co-created by people across disciplines and experience levels to demonstrate collaborative and inclusive practices of building software (i.e., pair programming using the new p5.js Web Editor) are especially encouraged.
Implementing the gearwork and communication interface for the USB 3.0 plugin module, designed by the apertus° Association, for streaming live 4K 12-bit video data at above 25 FPS through a USB 3.0 interface to a connected PC.
This project will improve the GitBook of syslog-ng by adding a chapter for plugin programming, with sections dedicated to the various types of plugins found in syslog-ng. A non-trivial example plugin will accompany each section, to aid in the explanation.
This project will add the capacity for Perl 6 to produce and execute self-contained binaries on Windows, Linux, and macOS. I will start with an executable binary for a basic Hello World program, and incrementally add support for linking to shared system and user libraries. Time permitting, I will work with my mentors to enable support and/or workarounds for linking to C and other languages.
My approach will be similar to how the .NET Core creates self-contained and framework dependent deployment binaries. I have chosen this as a model as it has proven capable of running on Linux, Windows, and MacOS systems. While my work may focus initially on either Windows and the Portable Executable (PE) or Linux and Executable and Linkable Format (ELF), this approach will hopefully ensure a path forward that will eventually be capable of producing executable binaries on all popular modern operating systems.
The basic method the .NET Core uses to create executable binaries is to embed a manifest of information in the operating-system-dependent executable format, link it against .NET's runtime library, and hand it to a .NET "main" function which executes the user program.
The project will aim to enhance the existing annotation capability and add other types of annotations, such as dimensions, labels, and single- or multi-line notes. The idea is to make all kinds of annotations share the same logic for entering annotation properties and for their graphical representation, taking advantage of existing primitives like lines and fonts to represent the annotations. This will help exploit the existing capabilities of these primitives, such as rotation and positioning.
Nuitka is a Python compiler written in Python. It is a seamless replacement or extension to the Python interpreter and compiles every construct that CPython does. Nuitka works by translating Python code into a C-level program which can be executed the same way as CPython, using libpython and a few C files of its own. All of Nuitka's optimizations aim at improving performance while ensuring perfect compatibility. This project ensures Nuitka's compatibility with the top 50 PyPI packages by setting up automated testing for each package. These automated tests will be important tools for the development of Nuitka, as they will assure compatibility with the most used Python packages every time Nuitka receives an update.
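A sketch of the per-package check such an automated test might run: compile a small driver script with Nuitka, then require the compiled binary to reproduce CPython's output. The flags follow Nuitka's documented CLI, but treat the exact options and this harness as assumptions, not the project's final test code.

```python
import subprocess
import sys

def nuitka_cmd(script, output_dir="build"):
    """Command line that compiles `script` with Nuitka in standalone mode."""
    return [sys.executable, "-m", "nuitka", "--standalone",
            f"--output-dir={output_dir}", script]

def outputs_match(script, compiled_binary):
    """Compatibility check: the compiled binary must reproduce CPython's
    stdout and exit code for the same script."""
    ref = subprocess.run([sys.executable, script],
                         capture_output=True, text=True)
    got = subprocess.run([compiled_binary],
                         capture_output=True, text=True)
    return ref.stdout == got.stdout and ref.returncode == got.returncode
```

In CI, one such check would run per top-50 package, each driver script importing and exercising that package.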
As part of GSoC 2018, a Password Reset project was initiated, meant to revolutionize the way password resets are handled in OpenMRS. It was completed successfully, with extensive work done on the backend, but there is no accompanying user interface. The goal of this project is therefore to continue the corresponding GSoC 2018 project by building a user interface for the password reset mechanism, and to introduce security measures such as CAPTCHA and location identification into the current password reset scheme.
NVDA provides support for the Windows command console used by Command Prompt, PowerShell, and the Windows Subsystem for Linux. It allows users to read the console's contents using review cursor commands, and automatically reports console output as it arrives. While functional, this support has some shortcomings, leading to decreased NVDA performance and stability and, by extension, user productivity. This proposal seeks to take advantage of changes to the command console introduced in the Windows 10 Fall Creators Update and refined in later releases, leading to an improved console experience for users on current Windows versions.
qual.net’s Rust rewrite will enable improved modularity, performance, and security, but rewriting an entire system in a new and rapidly evolving language is a difficult task. A network simulator able to mock out the entire world of devices and connections on which qual.net will have to operate is an important step in ensuring functionality is implemented as intended.
This project will result in a working MVP of a dual layer network simulator. This simulator will provide hooks for the netlink and HTTP API levels, into which tests can inject behavior and from which metrics and test results can be extracted.
LLVM functions can be tagged with several attributes. These attributes are used in optimizations to decide whether a particular transformation is valid. Function attributes can either be given by the frontend or be inferred by LLVM. Some attributes cannot be inferred today; if we could infer them, more optimizations would become possible. In addition, the current implementation has several problems. First of all, the algorithms are very similar but implemented separately; extending them to infer other attributes would cause a lot of code duplication and make consistency hard to maintain. It is therefore better to replace them with a more deliberately designed implementation. There is a new framework for deducing attributes, called the "Attributor", developed by Johannes Doerfert. I propose to extend this framework to deduce additional attributes.
This project aims to build an elaborate multimodal emotion recognition system inside a Singularity container. I will train a state-of-the-art audio-visual emotion recognition model on different datasets. In the proposed system, speech and video data are processed by two convolutional neural networks (CNNs). The outputs of the two CNNs are then fused using two consecutive extreme learning machines (ELMs). The output of the fusion is given to a support vector machine (SVM) for final classification of the emotions. I will use UCLA’s NewsScape dataset with ground truth annotations to evaluate the system. The system will be eventually incorporated into the Red Hen pipeline at CWRU HPC so that it can be used to recognize emotions in user-specified videos.
Clad is a C++ Clang compiler plugin that employs automatic differentiation to derive user-defined functions, performing source code transformations so that users do not have to conform to custom types to comply with external libraries.
With the integration of clad::gradient to CLAD, a reverse accumulation method for automatic differentiation was introduced. Now, it makes sense to move on to second partial derivatives, in particular, to calculating the Hessian matrix. My work is to build on the existing framework that uses Clang AST to do source transformations on functions, and to implement an efficient Hessian calculation method that extends the capabilities of CLAD using the edge pushing algorithm and Hessian reverse accumulation method. I will also try to extend existing CLAD functions to calculate the Jacobian matrix.
CGAL now provides basic viewers, i.e. global functions allowing some CGAL data structures to be visualized in 3D. These small viewers are very useful for visualizing the result of an algorithm and can help debug code. For now, 5 basic viewers exist, for Polyhedron, Surface_mesh, Triangulation_2, Triangulation_3 and Linear_cell_complex. The goal of this project is to develop more basic viewers for other CGAL data structures.
Project is to build a sequence-to-sequence encoder-decoder model as a module of the Hasktorch library, and demonstrate it on an NLP task.
The aim of the project is to develop an ASR pipeline utilizing the existing news conversation dataset and audio pipeline codebase. Additionally, it will be adapted to each speaker to obtain speaker-adapted ASR models.
This project aims at reviving promotion in Pipelines by introducing a new plugin which includes major updates and redesigns done on a copy of the existing promoted-builds-plugin. The end result would be a robust plugin which can run an on-demand promotion when the pipeline build is completed, along with other artifact tracking and job triggering functions.
This project aims to improve the performance of the Android client and enhance its visual appeal. It will bring analytics to the app, benefiting both its developers and its end users. Another important feature will be bringing the Provider module used in the web app to the Android client.
Build an inventory of the features present in the legacy UI that need to be moved to OWAs. Replace the existing JSP UI with an OWA based on React components, then delete the JSP pages and build any missing web service endpoints.
This game will encourage female teenagers to develop their own games in the Catrobat app. As a tutorial game it is the gateway to Catrobat, and its appeal and learning effects will decide, for many users, whether or not they become active users of the app. For many teenage girls, this will be their first experience with coding and, if it is designed successfully, it may direct their interests towards programming, a field they may have never considered otherwise. The effectiveness of the game will be evaluated in the context of a study directed by the Institute of Developmental Psychology of the Karl Franzens University, Graz.
This project will contain two parts:
Capture Stations
Bash/Python scripts to perform tasks like:
Coop
It will consist of a comms component which can communicate with the capture stations through SSH, a controlling component which can perform backups on user command, and a frontend that allows visualization through a centralized dashboard or cockpit. Configuration and addition of new capture stations through the UI of the dashboard is a stretch goal.
Brian 2 is the new version of the Brian software. In 2010 the ModelFitting toolbox was developed for the original version (doi.org/10.3389/neuro.11.002.2010). The main aim of this project is to adapt the ModelFitting toolbox to the new version of the software. Access to the module will allow computational neuroscientists to fit experimental data to neuron models.
The main goal of the module is to find the best mechanistic representation of the recorded spike trains. It has to be re-implemented for the needs of the new version of Brian and requires access to a new optimization library.
The goal of the project is to finish the proposed help-wanted issues in the PiPot GitHub repository, and to set up Travis CI for PiPot. Along with feature development and unit testing, we also want to do more research on honeypots, to see whether we can integrate more features/services into the current PiPot.
One of the major challenges faced when applying model checking is the state space explosion, due to which it becomes impossible to detect errors in many cases. The main aim of this project is to utilize multiple cores for state space exploration in order to achieve massive speedup.
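The core scheme can be sketched as a level-synchronous parallel search: each frontier of unexplored states is expanded across worker processes, and the results are merged into a shared visited set. The toy Python below (with an invented bounded-counter transition relation) only illustrates the idea; the real work targets the model checker's own exploration core.

```python
from multiprocessing import Pool

def successors(state):
    """Toy transition relation: a bounded counter that can +1 or *2."""
    return [s for s in (state + 1, state * 2) if s <= 100]

def parallel_bfs(init, succ, workers=2):
    """Level-synchronous exploration: expand each frontier in parallel,
    then merge newly discovered states into the visited set."""
    visited = {init}
    frontier = [init]
    with Pool(workers) as pool:
        while frontier:
            next_frontier = []
            for succs in pool.map(succ, frontier):
                for s in succs:
                    if s not in visited:
                        visited.add(s)
                        next_frontier.append(s)
            frontier = next_frontier
    return visited
```

A production implementation would additionally shard the visited set (e.g. by state hash) so that deduplication itself scales across cores.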
One of the most popular features in Rocket.Chat is auto-translate, where users can set their language preference to have all messages translated. This feature works by translating every incoming message into the user's language of choice. We would like the student to make it possible for the package to also translate every message the user sends into another language. As a plus, if the student can implement a channel language setting, it would be superb.
The project is already about 2 years old and has lots of features, so the chances of introducing new bugs by changing existing code or adding new features are quite high, and before each build the application needs thorough testing to make sure nothing is broken.
Currently the project has some end-to-end tests for the iOS platform but unfortunately none for Android. The Continuous Integration job on Circle CI with the iOS tests is also failing; the build always stays 'red'.
The aim of this project is to add GCC C extensions such as attributes, vector extensions, nested functions, etc. to Csmith, then run Csmith against GCC and report any compiler bugs found.
The following GCC C extensions will be added during the GSoC timeline:
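As an illustration of what is in scope, one such extension is GCC's `vector_size` attribute; a generator in the spirit of Csmith would emit C like the snippet below (this emitter is a hypothetical sketch, not Csmith's actual code):

```python
def emit_vector_add(name="add_v4si"):
    """Emit a C function using GCC's `vector_size` vector extension,
    the kind of construct the extended Csmith would generate."""
    return (
        "typedef int v4si __attribute__ ((vector_size (16)));\n"
        f"v4si {name}(v4si a, v4si b) {{ return a + b; }}\n"
    )
```

The emitted code would then be compiled at several optimization levels and checked for miscompilations or crashes.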
The task of recognizing and predicting human daily activities is a trending topic, and a lot of research has developed around it, accompanied by algorithms that achieve state-of-the-art results on different human activity data sets: CAD-60, CAD-120, UTKinect-Action, Florence3D-Action, etc. The applications of a machine that can detect a person's actions are broad: it has been developed for gaming, Human-Computer Interaction, and Active and Assistive Living.
For the development of RoboComp's framework, my work will be based on the article by Premebida, Souza and Faria (2017), where the proposed algorithm reached an accuracy of 94.74% and a recall of 94.74% on CAD-60, which is considered a state-of-the-art result. These articles are mainly based on Dynamic Bayesian Networks and a Dynamic Bayesian Mixture Model (Faria, Premebida, Nunes (2014)). Other works will be taken into account to take advantage of what we have available, combining machine learning approaches with mathematics to get robust results, such as the introduction of Partial Differential Equations or Lie groups, explained, for example, in Vemulapalli et al. (2014).
The Data Repository Service (DRS) API provides a generic interface to data repositories so data consumers, including workflow systems, can access data in a single, standard way regardless of where it's stored and how it's managed. The main goal of this project is the generation of a prototype server for the DRS API that follows the current draft specification of DRS.
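A minimal stdlib sketch of the prototype's core endpoint, `GET /ga4gh/drs/v1/objects/{object_id}` (the object fields shown are a reduced, illustrative subset of the draft schema, and the real server would sit in front of an actual repository backend):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory store standing in for the repository backend.
OBJECTS = {
    "obj1": {
        "id": "obj1",
        "size": 12,
        "access_methods": [
            {"type": "https",
             "access_url": {"url": "https://example.org/data/obj1"}},
        ],
    },
}

class DRSHandler(BaseHTTPRequestHandler):
    PREFIX = "/ga4gh/drs/v1/objects/"

    def do_GET(self):
        obj = (OBJECTS.get(self.path[len(self.PREFIX):])
               if self.path.startswith(self.PREFIX) else None)
        if obj is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(obj).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep request logging quiet
        pass
```

Serving is then `HTTPServer(("", 8080), DRSHandler).serve_forever()`; a real prototype would likely be generated from the OpenAPI description instead.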
Amahi is personal/home server media software, with supporting mobile apps and other devices, that allows us to stream and access all types of media files present on our own servers (also called HDAs). Currently, the Android app works great, but there's a lot of scope to improve it. This project mainly focuses on improving the following areas of the app:
The libvirt library provides a stable API for managing platform virtualization utilizing different hypervisors or virtual machine monitors. Each hypervisor requires its own driver inside libvirt, but not all API functions are supported by all drivers or hypervisors yet. In addition to the standard drivers, there is a supplementary fake driver, called test driver, designed to let applications test against libvirt with fake data and not have any effect on the host. As of today, there are still a lot of API functions not implemented by the test driver. The goal of this project is to expand the API coverage of the test libvirt driver.
This is my proposal for the project to improve DXF import and export for OpenSCAD. It includes a brief introduction of myself, a project description, and a timeline. The project description consists of three parts: preparation, which covers selecting an external library; integration, for both import and export; and testing, which covers creating test cases and testing the output of import and export. The timeline is the development schedule throughout the summer. An alternative timeline is available as well. Changes might be made following discussion with the organization and potential mentor.
To implement and test support for chunked transfer encoding in HTTP uploads (PUT/POST) in GNU libmicrohttpd.
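The wire format under test: each chunk is its size in hex, CRLF, the data, CRLF, and a zero-length chunk terminates the body. A small Python encoder (a hypothetical test-client helper, not part of libmicrohttpd, which is C) makes this concrete:

```python
def encode_chunk(data: bytes) -> bytes:
    """Encode one chunk of a Transfer-Encoding: chunked body:
    hex size, CRLF, data, CRLF."""
    return b"%x\r\n" % len(data) + data + b"\r\n"

def encode_chunked_body(chunks) -> bytes:
    """Full chunked body: each chunk, then the zero-length terminator."""
    return b"".join(encode_chunk(c) for c in chunks) + b"0\r\n\r\n"
```

A test client would send such a body in a PUT request with `Transfer-Encoding: chunked` (and no Content-Length) and verify the server reassembles it correctly.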
Automated program repair (APR) has been gaining ground recently, with substantial efforts devoted to the area. Not only has APR had great influence on academia, but it has also received considerable attention from industry. Motivated by the potential impact of APR, in this project we propose to build a repair framework that improves upon the past successful works VFix and S3.
In this project, we aim to fix NPE bugs. We will use data flow analysis, as proposed in VFix, to accurately localize buggy code fragments in the program under repair. We then use S3 to semantically reason about the identified buggy code fragments via dynamic symbolic execution on test cases. However, simply relying on test cases may not be sufficient, so we plan to further enhance the semantic reasoning in S3 with the ability to leverage user-provided annotations. After this step, we obtain a set of constraints that constitutes specifications of the program. We then propose a program synthesis technique that improves upon the template-based repair in VFix and the syntax-guided synthesis in S3 to synthesize repairs.
NodeCloud is Node.js based API for open cloud. It works as a standalone core and depends on the cohesive plugins that extends its support onto different cloud providers. Currently, NodeCloud supports AWS, Google Cloud Platform and Azure and houses a handy plugin for all the three providers.
With GSoC ‘19, I aim to extend the provider paradigm of NodeCloud, by expanding to DigitalOcean & AliCloud
MailSync is a compelling proof of concept for the NDN project, with the potential to make NDN's value understood through the popularization of this app. This project focuses on doing so by refactoring the app to remove unwanted behaviors. The project will add substantial testing and benchmarking code to ensure that the application remains stable. Furthermore, additional security measures and validation will be added to the application. To popularize the app, its UI and UX will be brought up to standards and the NDN theme will be incorporated into the application.
Currently, pod6 files are processed by various scripts and modules (htmlify.p6, Pod::To::HTML, Pod::To::BigPage, ...) that have repeated functionality, a low level of testing, and tight coupling between presentation rendering and source data. The pod6 files are even compiled several times.
So, what do I intend to do to change this? I have three objectives:
The Data Retriever is a package manager for data. The Data Retriever automatically finds, downloads and pre-processes publicly available datasets and it stores these datasets in a ready-to-analyze state. The Retriever project, however, suffers from some drawbacks which require attention:
MZmine 2 is an open-source software package for mass-spectrometry data processing, with the main focus on LC-MS data. MZmine 2 supports data processing, visualization and analysis of mass spectrometry-based molecular profile data. It has various data visualization techniques like chromatogram plots, intensity plots, 3D plots, scatter plots, histograms, etc. While the current visualization tools are quite helpful, there are some new tools that would be quite useful for mass spectrometry data analysis and visualization. Among these are the Cloud plot and the Robust Volcano Plot, which need to be implemented in MZmine 2. The main part of the project will be to implement a new 3D visualization tool whose purpose is to clearly show the detected peaks in 3D. For now the tool is implemented with the Java3D and VisAD libraries, neither of which is supported anymore with newer versions of the Java language. It is therefore necessary to update the tool with newer technologies such as JavaFX and its derived libraries like FXyz. The main goal of this project is to enrich the capabilities of MZmine for data visualization and interpretation.
The purpose of this project is to extend SymPy's ability to generate code involving matrix expressions by allowing the transformation of SymPy's AST before generation. Through extensions to the codegen AST, this interface will allow SymPy to be easily extended to generate highly optimized library calls (such as to BLAS/LAPACK).
A second portion of this project is to integrate SymPy with the LFortran project by generating intermediate representations understandable by the tool. Code generation for Fortran can then be offloaded to the LFortran backend.
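The AST rewrite can be sketched on a stand-in AST: recognize a matrix pattern and lower it to a library-call node instead of elementwise loops. SymPy's real codegen nodes and the BLAS `gemm` signature details differ; this only illustrates the pattern-to-library-call step.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mat:
    name: str

@dataclass(frozen=True)
class MatMul:
    left: object
    right: object

@dataclass(frozen=True)
class MatAdd:
    left: object
    right: object

def lower_to_blas(expr):
    """Rewrite `A*B + C` into a GEMM call node (here, just a string);
    anything else returns None, meaning: fall back to the generic
    elementwise code printer."""
    if isinstance(expr, MatAdd) and isinstance(expr.left, MatMul):
        a, b, c = expr.left.left, expr.left.right, expr.right
        return f"gemm({a.name}, {b.name}, {c.name})"
    return None

A, B, C = Mat("A"), Mat("B"), Mat("C")
```

In the actual project the same pass would run over SymPy's codegen AST, emitting `cblas_dgemm` calls (or LFortran IR) with the proper leading dimensions and scalar coefficients.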
Until now, Robocompdsl could only be executed from the command line, and its main functions are to generate a CDSL template and the code of an existing component or CDSL file. In order to make the user experience more enjoyable and to avoid programming errors, this document proposes creating a graphical interface for Robocompdsl.
The Bidirectional Packet Protocol for FPGA (Field Programmable Gate Array) communication extends the I/Os of a Xilinx ZYNQ by adding two Lattice MachXO2 FPGAs, each with various bus protocols (I2C, SPI, GPIO, ...). The Xilinx ZYNQ acts as a routing fabric connected to the MachXO2 FPGAs with a single LVDS pair, and the FPGAs share a common clock with the ZYNQ. In addition, the communication will be optimized with encoding and SERDES.
CGAL is a library for computational geometry. It has a demo of a package named '2D Regularized Boolean Set Operations'. Using generic programming and an understanding of computational geometry, this project aims at adding support to the existing demo for operations like the Minkowski sum, enhancing the functionality to support operations over a larger domain of polygons, and improving the UI/UX of this demo.
Nighres is an open-source Python package that enables high-resolution neuroimaging data to be easily and efficiently processed. Diffusion MRI is an imaging modality that is sensitive to random displacements of water molecules in brain tissue, yielding valuable information about the size, shape, and orientation of neurons. The purpose of this project is to help the neuroscience community by efficiently implementing an automated white matter parcellation algorithm in Nighres, which can be easily used without technical expertise, and by improving the algorithm to work with more realistic diffusion models than diffusion tensors. A comprehensive documentation and a tutorial will be written for the method to facilitate its use.
This project will bring tab completion support to LuaRocks and any Lua project using the argparse command line parser. Support for generating completion scripts for popular shells will be implemented in argparse. LuaRocks will be updated to use argparse for command line argument handling.
This project aims to port K12 mode from the legacy Tcl/Tk code base to the modern NW.js architecture. The focus is mainly on the frontend: adding a friendly user interface that allows effortless interaction with existing K12 abstractions. To further improve usability, full touchscreen support will be added as well. The proposal can be realized incrementally, with frequent pull requests safely merged into the master branch throughout the summer.
Tweets from US Democrats (or just anyone actively opposing Trump's presidency) and Russian conservatives (or anyone supporting the government) are collected and brought to a common denominator via translation-based and non-translation cross-lingual methods. The resulting corpus would be fascinating to research from several perspectives: for example, inferring political views through speech analysis, or comparing similarities in vocabulary between people of similar and different political views. The information can be retrieved through the Twitter API (Search or Stream, using the Tweepy library).
Research has shown that it is possible to detect early signs of Alzheimer's disease in speech. I propose to train a model which will learn to do this (and later to make an app assisting people at high risk of Alzheimer's). Data can be extracted from TalkBank (DementiaBank) or, for example, the Dementia Diaries website. The app could prompt its users to talk about their day and, using a speech recognition module, analyze the likelihood of Alzheimer's and changes over time.
Retroshare has an existing WebUI which covers the basic functionality of the client. However, it has limited functionality and does not make full use of the web platform.
The project aims to replace the old WebUI with a new, modern web interface that works by leveraging Retroshare's JSON API, which is already part of Retroshare's internals. It will be made entirely usable in a web browser, and will make use of modern web functionality, flexibility, and approachability.
Projects page link: https://projects.freifunk.net/#/projects?project=retroshare_port_web_interface_to_json_api&lang=en
CPAchecker is a framework which can be used as a software verification tool for C programs. We can use CPAchecker locally through a command line interface or in the cloud through a web interface. Even though CPAchecker is good for software verification and program analysis, a developer cannot use it inside an IDE, which massively decreases its usability. The solution to this problem is to create a plugin that makes CPAchecker usable inside the IDE.
We come up with models that give effective and efficient visualizations as recommendations for the given input. Current visualization tools require the user to manually select attributes and analyze the data. For someone who has limited time and domain expertise, this gets challenging when there are millions of attributes to derive insights from. To overcome this problem, we automate the process with deep models.
Sugar Dashboard is a user dashboard which shows user information like the last activity opened, the last project opened, activities installed on the device, and the most used activity, visualizing them with heat maps and graphs.
The Tamagotchi widget will replace the existing XO icon on the center of Sugar Dashboard. It will change its shape according to disk space, battery percentage etc.
The last part is to create a journal-like activity. The current Journal cannot be extended or modified by the end user without making changes to core Sugar. This activity will be similar to the Journal activity, but can be modified by a user who wants to make changes. This part might also include integration of the Portfolio activity, which currently uses Journal objects.
In this project, functionality to take input from a mix of ground stations and observations of different formats will be added to OrbitDeterminator. Key features of this project will be:
- An easy-to-use interface for the community where the existing methods and algorithms can be visualized
- A provision for the addition of observation inputs of all formats from a single station or a mix of many stations to OrbitDeterminator
- Development of standard input and output parameters so as to ensure the conversion of input data to our standard input format and its further processing
- Cleaning of the current codebase, with the ultimate result of a proficient beta release having all the new features incorporated by the end of GSoC 2019
The aim is to extend Data Sync (data synchronization using the Voyager framework) to Android by porting the current aerogear-js-SDK, in which it is currently implemented, to Android. This will reach a larger user base and provide users with offline support, conflict resolution, and the other Data Sync services that are the key features of the AeroGear mobile services.
UniverSiS is a student information management system developed by HEIs in Greece to handle all the academic and administrative tasks of a university or institution. It is an open-source project focused mainly on Greek universities and institutions. UniverSiS will be a great alternative to expensive solutions like Blackboard ( www.blackboard.com ) and Edmodo ( www.edmodo.com ), and will definitely be a better solution than the troublesome Moodle ( www.moodle.org ). In Google Summer of Code 2019, I plan to implement all the given tasks using the UniverSiS student management system and all other necessary tools and frameworks.
The aim of this project is to include advanced features in the DataFrame library, such as handling missing values, as well as to improve documentation support and add tests, examples, and tutorials demonstrating the use of this library for various data analysis tasks.
CDGen is an application using Eclipse APP4MC for code generation from the system model, enhancing cost-effectiveness and decreasing the chance of errors compared to manual coding. The main outputs of this application are C and header files which hold all the details of the model for the compilation and build process (generating executables for running on the processor). Based on the Eclipse Modeling Framework, its capabilities include not only hardware and software modeling but also tools for visualization and processing. The application will be added to the set of tools of Eclipse APP4MC.
Apache Mnemonic is a non-volatile, hybrid-memory, storage-oriented library. It proposes a non-volatile/durable Java object model and durable computing services that bring several advantages: they significantly improve the performance of massive real-time data processing/analytics and help build cache-less and SerDe-less high-performance applications. Mnemonic has two memory services based on NVML. In December 2017, NVML was renamed PMDK, and it has since been growing its libraries and tools. So, we need to upgrade Mnemonic's volatile and non-volatile memory services to use the PMDK libraries. PMDK is tuned and validated on both Linux and Windows; its libraries build on the DAX feature of those operating systems, which allows applications to access persistent memory as memory-mapped files, as described in the SNIA NVM Programming Model.
Currently RoboComp has a tool to deploy components that has been improved through several GSoC editions. Its name is RCManager, and it is used on a daily basis by all the people who use RoboComp to program robots. Since this tool is crucial to the software development process, and since robot software, as a large-scale distributed system, is constantly increasing in complexity, we need to improve this tool as much as we can. The first extension is to make the tool access remote RoboComp installations to create a list of potential components to be added to the deployment set. The second extension tackles the need to group sets of components into higher-order entities, so visualization is simplified. A third extension is to include the capability to probe the edges in the graph of components, so a pop-up window would show in real time the traffic moving through them. When connections between components use the publish/subscribe modality the probing is easy, but when communication is done using the pull/request modality things get more complicated.
The idea of this project is to make software with which a user can build deep learning models in an easy way using a graphical user interface, with a backend supported by TensorFlow. Through the graphical user interface, a user will be able to add, delete, and edit deep learning layers in a model. The main purpose of the project is to make the implementation of deep learning models quick and easy.
The software will be built using Electron, a framework for building cross-platform desktop apps with HTML, CSS, and JavaScript. It will have a drag-and-drop feature to build deep learning models in the form of a graph, which will then be converted to Python code by the software. The generated code will then be executed in a child process, which trains the deep learning model and sends the metrics data (loss, accuracy) to the parent process, which then plots the statistics.
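The parent/child metrics flow described above could be sketched as follows. This is an illustrative stand-in, not the project's actual code: the child script, its metric values, and the JSON-lines protocol are all assumptions made for the example.

```python
import json
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the generated training script: it emits one
# JSON metrics line per epoch on stdout, as the real child process would.
CHILD_SCRIPT = """
import json
for epoch in range(3):
    print(json.dumps({"epoch": epoch, "loss": 1.0 / (epoch + 1), "accuracy": 0.5 + 0.1 * epoch}))
"""

def run_training_and_collect_metrics():
    """Spawn the child process and parse the metric lines it streams back."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(CHILD_SCRIPT)
        path = f.name
    out = subprocess.run([sys.executable, path],
                         capture_output=True, text=True, check=True)
    return [json.loads(line) for line in out.stdout.splitlines() if line.strip()]

metrics = run_training_and_collect_metrics()
```

In the real application, the parent (Electron) would read these lines incrementally from the child's stdout and update the plots live rather than waiting for the process to finish.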
When the designer section of phpMyAdmin was initially written, jQuery did not exist. Now that it does, and given its advantages over plain JavaScript, we could make this section better by making use of jQuery and also resolve the existing issues. This project will benefit both of the important stakeholder groups, i.e. the developer and user communities.
Currently GNOME Boxes is able either to do express installations on a downloaded ISO or to download an ISO and offer the option to express-install it. This project will add support for express installations using the OSes' network trees. This would reduce the download size and mainly benefit users with slower internet connections.
Swarm robotics is an approach to the coordination of large numbers of robots in order to tackle a given task, inspired by the observation of social animals and their behaviour, in which individuals perform tasks to solve group problems. My proposal is to implement examples of collective behaviors using swarm robotics strategies in new or existing scenarios of the RoboComp RCIS simulator, such as collective exploration of the scenario, pattern formation, morphogenesis, or other collective activities. There would also be verbal or non-verbal communication between the swarm components in order to send commands to the crowd.
ImageJ is an open-source image processing tool written in Java which has been extremely helpful in the analysis of scientific images, especially medical and microscopic ones. The ImageJ package includes a plugin for segmenting all the cells in a C. elegans embryo from raw SPIM images. This plugin, built as part of GSoC '17, comprises standard image processing techniques to segment the cells.
When attempting to segment cells from high-resolution time-lapse movies of embryogenesis, the unsupervised algorithms used, such as intensity thresholding, the watershed transform, and active contours, which depend on extracted local or global features, are not robust against uneven illumination, optical noise, complex cellular shapes, and similar distractions. Thus, we are still in pursuit of a robust method for accurately segmenting microscopy images.
The top priorities of this proposal are: extend the plugin's capability to segment cells more accurately by adding semi-supervised and unsupervised methods, and focus on extending the capabilities of tools for developmental data science with the help of datasets, or develop a cell-tracking system for bright-field movies.
Red Hen gathers Chinese broadcasts to make data sets for NLP, OCR, audio, and video pipelines. Currently, Red Hen has a preliminary ASR pipeline, but it needs great improvement. This proposal is divided into two parts. The first is to improve the ASR pipeline, which involves three steps: find a source of correct transcripts of the shows; use a different way to cut the audio; use new models to train on the data. The second part is to build a CONCRETE Chinese NLP pipeline which includes basic tasks like data ingest, word segmentation, part-of-speech tagging, etc.
The project idea is to work on the development of the Containers library and to develop, collect, clean, test, and document alternate collections and data structures. It is important that each package is modular so that users can load only the collection they need, without hundreds of related collections, and thus significantly reduce the image size (modular design is of vital importance).
Main project goals: document, refactor, and test existing collections and migrate them to the Containers library (if needed); develop new collections (with appropriate tests and documentation) and include them in the library.
Krita is only available for desktop OSes; my proposal is to add support for mobile devices, Android to be specific.
The proposal intends to build a reliable and maintainable method for Android builds, improve the look and feel of the Android app, handle permissions and file system access, and work on the core usability of the app.
The palette in MuseScore is redesigned using a QTreeWidget. With the help of model/view programming and custom delegates, a new view is created to display the palette in two ways: List View and Icon View. Keyboard shortcuts are also implemented to access the branches of the QTreeWidget.
LLVM automatically derives facts that are only used while the respective translation unit, or LLVM module, is processed (e.g. that a function is constant or error-throwing). This is true both in standard compilation and in link-time optimization (LTO), in which the module is (partially) merged with others in the same project at link time. LTO is able to take advantage of this to optimize function calls outside the translation unit. Code compiled without LTO for all of its dependencies, however, does not have this information and is therefore unable to perform such optimizations. To remedy this issue, one might propose always compiling programs with LTO enabled. This doesn't solve the problem, for two practical reasons: LTO comes with a nontrivial compile-time investment, and many libraries upon which a program could depend do not ship with LTO information, simply headers and binaries. In this project, we propose solving the problem by generating annotated versions of the source code that also include this derived information. Such an approach has the benefits of both worlds: allowing optimizations previously limited to LTO without running LTO, while only providing headers.
Implement the game engine developed by S.K.I.F., targeting the original versions of Red Comrades 1: Save the Galaxy (1998) and Red Comrades 2: For the Great Justice (1999).
The project aims to add Service Accounts functionality to the existing Rocket.Chat application. Service accounts will be an upgrade to regular user accounts, with a well-defined purpose. Regular users can subscribe to such an account to get information regarding the context mentioned in the description of the service account.
Mifos/Fineract Chatbot and Adapter 2.0 is the next iteration of the project. This project idea aims to provide integration between a chatbot and Mifos, implemented across various major chat platforms. Some of the platforms on which this chatbot will be implemented are:
NLP integration will also be improved. Some of the improvements are:
This project will also provide a better authentication solution and session management.
The project is aimed at making SymPy able to convert code from Fortran/C into SymPy expressions. After completion and implementation of this project, SymPy would be able to take a block of Fortran or C code, convert it into a SymPy expression, perform all the desired operations, and convert it back to source code in the original language. SymPy already has the feature of generating source code for other languages from SymPy syntax. This would allow SymPy to easily read in, alter, and write out computational code. SymPy can easily be embedded in other programs as a computing engine because it is fast compared to many other libraries and languages for computation. The other objective of this project is to improve LFortran and establish it as an interactive computational environment in Fortran and a library with an effective API in Python, which also increases the usability of LFortran. The project can be extended after the program is over by creating a parser for converting user code into SymPy syntax, and can also be expanded to process natural language instructions.
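The parse-transform-regenerate pipeline described above can be illustrated with a deliberately tiny toy (this is not SymPy's or LFortran's actual API): parse an arithmetic expression into a tree, then print it back out as C source.

```python
import ast

# Toy sketch: walk a parsed expression tree and emit equivalent C source,
# illustrating the "read in, alter, write out" round trip the project targets.
_C_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def to_c(node):
    """Recursively print an arithmetic expression tree as C source."""
    if isinstance(node, ast.Expression):
        return to_c(node.body)
    if isinstance(node, ast.BinOp):
        return "(%s %s %s)" % (to_c(node.left), _C_OPS[type(node.op)], to_c(node.right))
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return repr(node.value)
    raise ValueError("unsupported node: %r" % node)

expr = ast.parse("a * x + b", mode="eval")
c_source = "double f(double a, double x, double b) { return %s; }" % to_c(expr)
```

The real project faces the much harder inverse direction as well (parsing Fortran/C into symbolic expressions), but the emit side follows this same tree-walking shape.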
Working on the AXIOM Remote hardware
TensorMap will be a web application that will allow users to create machine learning algorithms visually. TensorMap will support reverse engineering of the visual layout into a TensorFlow implementation in preferred languages. The goal of the project is to let beginners play with machine learning algorithms in TensorFlow with little background knowledge of the library. Expected results:
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector on the Large Hadron Collider (LHC) ring. It is designed to study the physics of strongly interacting matter at extreme energy densities, where a phase of matter called quark-gluon plasma forms. The new ALICE synchronous data reconstruction facility for Run 3 needs a real-time conditions and calibration data distribution mechanism. New calibration objects are produced at up to 50 Hz and have to be propagated to about 2000 servers. For efficient data distribution in this environment, a network multicast delivery mechanism has to be used. Two sides will be implemented for this project: a library to send the newly produced objects, and a caching service to run on each of the 2000 servers to receive the objects and keep them in memory, making them available to processes running on localhost via a REST API.
This proposal promises to develop a JupyterLab extension that will allow users to specify Python modules (and respective versions) via a user interface, making them available inside the notebook cells automatically. This extension would improve the user experience for interactive programming and data analysis using tools like SWAN.
Design System React is a library of React-based components derived from the Salesforce Lightning Design System (SLDS). The project aims to extend and improve the Design System React (DSR) component library by porting different components from the Lightning Design System library. The goal of the project is to help non-UI engineers implement highly interactive web applications with ease by extending the variety of components that are available to them through the DSR library. Adding a greater variety of components will also help DSR be useful for a wide range of applications, and make it a go-to React library.
This project aims to add support for the File Loads method of inserting data into BigQuery for streaming pipelines. PR #7655 for [BEAM-6553] added support in the Python SDK for writing to BigQuery using the File Loads method for batch pipelines. However, support still needs to be added for streaming pipelines.
Streaming pipelines with non-default windowing, triggering, and accumulation modes should be able to write data to BigQuery using the File Loads method. In case of failure, the pipeline should fail atomically. This means that each record should be loaded into BigQuery at most once.
The JIRA issue for this project is [BEAM-6611].
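The windowing and at-most-once behavior described above can be illustrated with a minimal sketch. This is not the Beam API, just a plain-Python model of the idea: group streamed records into fixed windows, deduplicate by record id, and issue one atomic "load job" per window.

```python
from collections import defaultdict

def window_for(ts, size):
    """Start of the fixed window containing timestamp ts."""
    return ts - ts % size

def batch_for_file_loads(records, window_size):
    """records: iterable of (timestamp, record_id, payload) tuples.

    Returns one batch per window; duplicates of a record id within a
    window are dropped, so each record is loaded at most once.
    """
    windows = defaultdict(dict)
    for ts, rid, payload in records:
        windows[window_for(ts, window_size)].setdefault(rid, payload)
    # Each window's batch would become a single atomic BigQuery load job.
    return {w: sorted(batch.items()) for w, batch in windows.items()}

jobs = batch_for_file_loads(
    [(1, "a", 10), (2, "a", 10), (3, "b", 20), (12, "c", 30)],
    window_size=10,
)
```

In the real implementation the heavy lifting (triggering, temp-file staging, load-job retries) is done by Beam and the BigQuery load API; the sketch only shows why windowing plus per-window batching yields the at-most-once guarantee.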
The aim of the project is to teach Python to people of all ages and backgrounds in a lucid, creative, and interactive manner, in the form of an interactive course created using the Runestone framework, integrating the YouTube API and Sphinx documentation with it. The aim is also to create a massive online community course to teach Python in Spanish. This project has the potential to reach thousands of students and people of all ages and teach them Python in an easy manner, so that they are able to apply the concepts in real-life situations, which is of paramount importance. It involves creating an entire Runestone setup and serving lectures, which would be translated to Spanish through Sphinx internationalization. The quizzes and assignments will be created using Runestone's components, so as to provide a highly interactive environment for people to learn and challenge themselves. The lectures will be accompanied by examples and interactive exercises made using Runestone's components, which will help in providing a thorough explanation of the concepts discussed in the lectures. Videos will be added to support the content, along with automatic subtitle generation.
There have been impressive advancements in the domain of 3D reconstruction algorithms that use an RGBD camera. However, there are not many open-source implementations of such algorithms. One such algorithm is DynamicFusion, developed by Newcombe et al. in 2015.
DynamicFusion is a 3D reconstruction algorithm that extends KinectFusion to handle non-rigid deformations in the scene. It accomplishes this by estimating a 6D motion field that is used to warp the canonical geometry onto the live frame.
This project aims to bring a CPU-based DynamicFusion implementation to OpenCV as an extension of the rgbd module.
The project sets out to add tiling support to the Adreno 3XX Gallium driver: figuring out the various layouts for different types of textures and when to use them, and updating the use cases. Tiled rendering greatly reduces the required memory bandwidth, which is particularly important for handheld/embedded devices and low-power GPUs with restricted resources. This is especially important nowadays with the advent of IoT.
I want to make a new Markdown WYSIWYG editor with a lightweight renderer, written in C++ and Qt, for KDE. It will support pagination, print preview, better text rendering, and user-configurable color themes.
The project is related to Google Scholar profiles and metrics. Many researchers have a Google Scholar profile. It is used to see how many papers a researcher has written, how many citations they have received, their h-index, their i10-index, and so on. But these metrics are flawed. The goal of the project is to extract information from Google Scholar, compute better metrics about a researcher's performance, and then display these better metrics, with more statistics and evaluations, on another website.
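For reference, two of the baseline metrics mentioned above can be computed from a list of per-paper citation counts as follows (a minimal sketch; the citation counts here are made up for illustration):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

papers = [25, 8, 5, 3, 3, 0]  # hypothetical per-paper citation counts
```

For these counts, h_index(papers) is 3 (three papers have at least 3 citations) and i10_index(papers) is 1. The project's "better metrics" would go beyond these simple counts, which is precisely why they are worth showing as the baseline.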
This project aims to add traditional machine learning algorithms to the Swift for TensorFlow library. We aim to implement logistic regression, support vector machines, random forests, gradient-boosted decision trees, and k-means algorithms.
The aim of this project is to package additional Android SDK tools in Debian and to update some already-packaged Android SDK tools to the latest upstream versions. As a result, Debian users will be able to install these Android SDK tools easily from the official Debian repositories. At first these tools will appear in the Debian Unstable repository (the version of the operating system for developers) and then in Debian Testing (for users who want new software). They will then also be easily available for installation not only to Debian users, but also to users of the many other operating systems based on Debian Testing (such as Kubuntu, Ubuntu, Linux Mint, etc.).
The goal of this project is to write a solid, feature-complete, and reasonably performant implementation of the deshake video filter in OpenCL. This filter uses motion estimation to compensate for erratic camera movement and smooth out shaky footage.
Secondary goals to be pursued if time allows include the addition of interop between macOS's videotoolbox API and the OpenCL filters as well as ports of some of the other simple post-processing filters (such as delogo).
ZAP has good support for WebSockets, and allows them to be intercepted, changed, and fuzzed. However, it doesn't currently support scanning, either passive or active, of WebSocket messages. Thus, it is necessary to start with an infrastructure that is going to support scans, both active and passive. The infrastructure should handle the addition and removal of plugins, provide appropriate utilities, run in a background thread, possibly store scanning statistics, etc. A plugin, on the other hand, implements a particular scanning method for a group of vulnerabilities. A script plugin is a special kind of plugin used to run scripts written by users, which are then processed by different scripting engines. Finally, an API is useful for interconnecting ZAP with other applications like the ZAP HUD. I am proposing an infrastructure that will support active and passive scanning. In addition, I am proposing a feature which I believe is essential, as well as plugins that test the most critical vulnerabilities.
The Gaussian Mixture Model (GMM) is widely used in computer vision as a state-of-the-art clustering algorithm. This project proposes the Quantum Gaussian Mixture Model (QGMM) for quantum clustering. According to the paper, the QGMM outperforms the classical GMM in every aspect of the estimations. Therefore, in this project, we will implement the QGMM supported by the powerful features of mlpack and run experiments to see how fast it trains, how much better it models the data, and where the edge cases are in comparison with the classical GMM.
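As a point of comparison, the E-step of the classical GMM that the QGMM is measured against looks like this in one dimension with two components (a minimal sketch; the QGMM variant from the paper replaces these probabilities with quantum-inspired amplitudes, which is beyond this illustration):

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a 1-D Gaussian with given mean and variance at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, weights, means, variances):
    """Posterior probability that point x came from each mixture component."""
    joint = [w * gaussian_pdf(x, m, v)
             for w, m, v in zip(weights, means, variances)]
    total = sum(joint)
    return [j / total for j in joint]

# A point near the first component is assigned to it almost entirely.
r = responsibilities(0.1, weights=[0.5, 0.5], means=[0.0, 5.0], variances=[1.0, 1.0])
```

The M-step would then re-estimate weights, means, and variances from these responsibilities; mlpack implements the full loop in optimized C++.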
Fact Bounty is a crowd-driven fact-checking platform. With the growth of the social media ecosystem came the issues of misleading news and rumours. This creates chaos and distrust among the masses. Fact Bounty tries to minimise this chaos by spreading awareness among people of the authenticity of a news item. In this project, I would like to
I propose to expand the current FrameNet Brasil Web Annotation Tool to allow for users to annotate video, pictures, and audio, in addition to their current ability to annotate text. Multiple of these modalities will be able to be annotated for the same piece of media, both separately and together as needed. This will be especially useful for research into multi-modal comprehension and communication, and will also allow for the FrameNet project to develop new semantics-based applications across many types of media.
Port KDE Connect to macOS with enhanced features, including native notifications, a context-menu "Send to" entry, etc. This would make KDE Connect the first open-source, multi-functional alternative to Apple's Continuity on macOS for Android phones.
The Clang Static Analyzer can discover errors in code by a technique called symbolic execution. Its core essentially interprets C, C++, Objective-C, or Objective-C++ code, and at several program points allows its modules, or checkers, to emit reports. Later, it constructs a bug report that shows how the error can be reproduced. These steps (such as the analyzer modeling a call to a function, or assuming a variable is positive) are collectively referred to as a bug path. Ideally, this contains all the information needed to reproduce the error, but no more. However, it can contain either too little or too much information.
In my proposed project, I intend to use static backward program slicing to enhance bug report generation. Instead of running a fixed-point algorithm on the control flow graph (CFG), though, my solution would be implemented on the abstract syntax tree (AST). This solution would only be an estimation, so I only propose to add more information to the bug reports, and to research whether program slicing could be used for bug path shortening as follow-up work.
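The core idea of backward slicing can be shown on straight-line code (a toy illustration only; the real project works on Clang's AST, not on def/use tuples like these): starting from the variable of interest, keep only the statements its value depends on.

```python
def backward_slice(statements, criterion):
    """statements: list of (defined_var, set_of_used_vars) in program order.

    Walk backwards, keeping a statement only if it defines a variable the
    slice currently depends on, then add that statement's own uses.
    """
    relevant = {criterion}
    kept = []
    for defined, used in reversed(statements):
        if defined in relevant:
            kept.append((defined, used))
            relevant.discard(defined)
            relevant |= used
    return list(reversed(kept))

program = [
    ("a", set()),   # a = input()
    ("b", set()),   # b = input()
    ("c", {"a"}),   # c = a + 1
    ("d", {"b"}),   # d = b * 2   (irrelevant to c)
]
sliced = backward_slice(program, "c")
```

Here the slice on `c` keeps only the definitions of `a` and `c`; applied to bug paths, the same dependency filtering is what lets irrelevant steps be dropped from (or relevant ones added to) a report.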
The Postgres Operator is a project to create an open-source managed Postgres service for Kubernetes; it manages Postgres clusters on Kubernetes. kubectl plugins enable extending the Kubernetes command-line client, kubectl, with commands to manage custom resources. The task is to design and implement a plugin for the kubectl postgres command. My project aims to simplify and ease the usage of Postgres clusters via the kubectl plugin. As the Postgres Operator offers many features, having a kubectl plugin will make running the clusters and understanding the resources much easier.
Gazebo ROS packages (gazebo_ros_pkgs) provide a ROS interface to Gazebo simulations, letting developers test their ROS code in a virtual simulation instead of on a physical robot. As part of this GSoC project, various gazebo_plugins would be ported to ROS 2. Necessary additional plugins would also be added.
Flow Completion Time (FCT) has been the core metric to optimize via scheduling, congestion control, and load balancing in Data Center Networks (DCNs). As a user-perceived metric, FCT is one of the most important network performance metrics, as users typically want their flows to complete as quickly as possible during their interactions with the network, e.g., webpage downloads and file transfers. The past decade has witnessed significant interest in studying FCT minimization in networked systems research, especially in DCNs.
The current NS-3 is less friendly to researchers in this domain. If someone wants to simulate flows based on a specific flow size distribution trace, implement scheduling policies to minimize the FCT, or analyze FCT metrics in a DCN environment, he or she needs to make a lot of changes to the NS-3 code base. This project aims to augment NS-3 with the missing helpers to support research in this domain, so that researchers can try things out with a minimal amount of coding. The framework will supplement the NS-3 ecosystem with components such as Shortest Job First and Multi-Level Feedback Queue scheduling, packet tagging, a topology helper, applications, and more.
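The FCT statistics such a framework would report are simple to state: given a start and finish time per flow, FCT is the difference, summarized by mean and tail percentiles. A minimal sketch (the flow times below are made-up illustrative values; NS-3 itself is C++, so this is only a model of the calculation):

```python
def fct_stats(flows):
    """flows: iterable of (start_time, finish_time) pairs for completed flows."""
    fcts = sorted(end - start for start, end in flows)
    n = len(fcts)
    return {
        "mean": sum(fcts) / n,
        "p99": fcts[min(n - 1, int(0.99 * n))],  # crude percentile by rank
        "max": fcts[-1],
    }

stats = fct_stats([(0.0, 1.0), (0.0, 4.0), (1.0, 2.0), (2.0, 2.5)])
```

Tail percentiles (p99 here) matter most in DCN research, since scheduling policies like Shortest Job First trade a few long completions for many short ones.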
I propose replacing VPR’s XML reader for the architecture file with a schema-generated one. Furthermore, I propose replacing current rr_graph, top.net, route and place formats with more efficient binary formats. I’m planning to use FlatBuffers for top.net, route and place, and a custom, memory-mappable binary format for the rr_graph file.
As an addition, I will investigate possibilities to generate the rr_graph in VPR from smaller pieces of information such as tilegrid.json and tileconn.json.
As seen on Bazel's issue page, Bazel suffers from numerous NullPointerExceptions (NPEs). These NPEs surface at runtime as RuntimeExceptions, causing crashes and leading to a large number of issues being filed. This could be a result of possible null values not being handled in many cases, or of outright poor design choices (as seen in many issues caused by not handling null or empty strings as arguments when using certain flags).
Checker Framework's Nullness Checker runs as a compiler plug-in, and it issues a warning at every possible null pointer dereference. If it issues no warnings, the code is guaranteed not to throw a NullPointerException at run time.
My project will ensure a significant reduction of these NullPointerExceptions before they happen (by checking for them at compile time), rather than relying on users to report them and then fixing them.
Pallene, the statically typed sister language of Lua, requires a Foreign Function Interface (FFI) to C. As a starting point, there is a need to create a library in Lua that can parse C header files and provide functionality to represent C declarations in Lua.
As modern OpenSCAD usage has advanced, it now makes more extensive use of libraries consisting of many files, and the editor features have not kept pace with this more advanced workflow. The goal of this project is to add various IDE-like features to the integrated text editor, making it more user-friendly and better suited to handling large codebases. Currently, the editor supports features like syntax highlighting, bracket matching, and line numbering, among others. I will work on implementing multi-file editing support in the editor, as well as autocompletion of OpenSCAD keywords.
Creation of build rules that generate API documentation from C and Python source code, and further analysis of whether to use GTK-Doc syntax or other alternatives. Also, work on the existing documentation to improve clarity and formatting.
The idea behind the project is to make the Xen hypervisor available and easy to use on the BeagleBoard-X15. This implementation will allow users to experiment with embedded virtualization and related fields like automotive, critical-systems prototyping, and processor resource sharing. It might also provide very interesting possibilities for heterogeneous resource utilization. The most interesting outcome is the availability of open-source virtualization on top of open-source hardware.
This project consists of the following parts:
While the current Firefox browser's Tab Manager works well for single-tab features, it lacks the flexibility and functionality to manage and arrange tabs across multiple tabs, multiple windows, and multiple devices. The new Tab Manager menu will alleviate redundant tabs (a feature currently disabled), add more functions when selecting multiple tabs, create new options to customize across windows and devices (through synchronization), make the tab manager dropdown menu always visible even when tabs do not overflow, and render a small icon of a hovered tab.
Once finished, this project will help Firefox enhance its user experience and give users more power to control the browser’s tabs.
Currently the Xapian tests run one after the other. Since most machines these days have more than one processor, the tests can be sped up by running them in parallel. This project attempts to accomplish that by implementing a test scheduler which can run tests in parallel. It also attempts some other improvements to the test suite, such as the implementation of a test timer to track slow test cases, and speeding up certain slow test cases by using generated databases for them instead of having them recreate one every time.
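The scheduler-plus-timer idea can be sketched as follows (Xapian's harness is C++; this is only a model of the approach, and the threshold and test names are invented for the example): run independent test cases on a pool of workers and time each one so slow tests can be flagged.

```python
import concurrent.futures
import time

SLOW_THRESHOLD = 0.05  # seconds; illustrative cut-off, not Xapian's

def run_suite(tests, workers=4):
    """tests: list of (name, callable). Runs them in parallel, timing each."""
    def timed(item):
        name, fn = item
        start = time.perf_counter()
        fn()
        return name, time.perf_counter() - start

    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for name, elapsed in pool.map(timed, tests):
            results[name] = elapsed
    slow = [name for name, elapsed in results.items() if elapsed > SLOW_THRESHOLD]
    return results, slow

tests = [("fast_case", lambda: None), ("slow_case", lambda: time.sleep(0.1))]
results, slow = run_suite(tests)
```

The real scheduler must additionally isolate tests that share on-disk databases, which is exactly why the proposal pairs parallelism with pre-generated databases.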
This project proposes to update the p5.serial library, created by Shawn Van Every and contributed to by Jen Kagan and Tom Igoe. The last commit to the p5.serial library was on November 22, 2017, according to the library's GitHub repository. Considering its wide user base, it will be valuable to keep the library updated using the latest ES6 JavaScript features. Additionally, improving the p5.serialport desktop application by developing its user interface as well as the serial console view will enhance the debugging process. The proposal also includes contributing to the documentation of the library by creating a walkthrough tutorial of the full workflow of connecting a microcontroller to a p5.js sketch, which will be a valuable resource for new users.
The Firefox Accounts platform tracks security information about an account, but does not surface this information in an easily consumable format, which makes this project essential and useful for administrators as well as users in making better decisions about security. The goal of this project is to provide Firefox Accounts administrators and users with an easily digestible view of the important events that have occurred on an account, providing a way to audit for irregularities.
Currently, Oppia is server-side rendered using the Jinja templating engine. This poses performance issues: each time a page is loaded, the server has to generate a new copy of the page to be rendered, and hence the pages cannot be cached.
This project aims to replace the Jinja templates used to serve pages with a more static method: fetching the dynamic content needed for each page via AJAX calls and updating the page as needed.
Free digital signage and presentation screen solution with LibreOffice and cheap single-board computers.
Adding multi-class classification to the Moodle machine learning backend by exposing functionality derived from Python TensorFlow and the PHP-ML library to the core.
While MAVProxy serves as a fully functional, cross-platform, portable ground control station, the system lacks a graphical user interface, making it more difficult to use than other GCSs like Mission Planner, QGroundControl, etc. The GCS can also be improved by adding modules to replace commonly used, cumbersome terminal commands. A major deliverable of this project would be the addition of a UI-based parameter handling module with incremental search capabilities. With a recent increase in the number of users, academic researchers, and developers working on swarms and multi-vehicle simultaneous control, a new module providing a few high-level commands to all vehicles or to individual vehicles through a GUI-based environment could be a good addition to the GCS.
The AcousticBrainz database contains detailed high- and low-level information for millions of audio recordings, all of which makes it an essential resource for creatives, researchers, and music fanatics alike. Our understanding of audio can be greatly improved through features that focus on similarities between the content of recordings in such a large database. As such, the development of a similarity index between recordings is essential to improving the AcousticBrainz platform and also to the progression of music recommendation engines in related projects like ListenBrainz.
Especially in relation to AcousticBrainz, previous investigations of similarity systems have demonstrated the success of content-based (high- and low-level data) engines for determining track similarity. These implementations have fallen short, however, since their architecture prevents scalability, ultimately lacking the speed required for use in AcousticBrainz.
With the information gained from previous pitfalls in recording similarity research and the importance of improved efficiency for a long term implementation, my 2019 GSoC project aims to lay the foundation for an AcousticBrainz similarity engine.
Transaction Cost Analysis (TCA) of an investment program is a fundamental framework to pursue its best execution as costs minimization is a necessary condition to achieving it, given investors' objective in terms of return and risk tolerance. It consists of a pre-trade analysis, an intraday analysis, and a post-trade analysis.
Pre-trade analysis is based on forecasting the set of variables that will influence the financial asset price dynamics and defines which strategy is the best to achieve ex-ante best execution.
With intraday analysis, the execution of such investment decisions is monitored and potentially adapted depending on real-time observed market conditions.
Post-trade analysis has two main goals: measuring the costs of transactions that occurred and evaluating the performance of executed market orders. The aim of our project is to develop the blotter package by implementing two base classes of mathematical models in the algorithmic trading framework, the Market Impact models used in pre-trade analyses and the Algorithmic Transaction Cost Analysis models used in post-trade analyses, together with the methodologies for their performance evaluation, comparison, and parameter estimation.
A Python-based tool that can interact with GNU Radio block headers written in C++, automatically parsing them and extracting information such as which getters/setters they have, their I/O signatures, factory signatures, etc. By analyzing the header code, the tool can create an abstract representation of the block in the form of a tree, making the code readable in a hierarchical fashion. The abstract representation can then be parsed for further use, such as creating YAML files for the GRC. GRModtool can be extended with the parsing tool as one of its utilities.
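The kind of extraction described could be sketched as follows (a deliberately naive regex illustration; the header text and naming convention below are made up, and the real tool would use a proper C++ parser rather than regexes):

```python
import re

# Hypothetical C++ block header, purely for illustration.
HEADER = """
class amplifier {
public:
    float get_gain() const;
    void set_gain(float gain);
    int get_rate() const;
};
"""

def extract_accessors(header_text):
    """Pull getter/setter names out of a C++ header text.
    Assumes a get_*/set_* naming convention, which is an assumption
    of this sketch, not a guarantee of real GNU Radio headers."""
    getters = re.findall(r"\bget_(\w+)\s*\(", header_text)
    setters = re.findall(r"\bset_(\w+)\s*\(", header_text)
    return {"getters": getters, "setters": setters}

info = extract_accessors(HEADER)
```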
The repositories for Chat SUSI, Skill SUSI, and Account SUSI contain common components like Account Settings, Top Bar, Login, Sign Up, Logout, and Forgot Password. I intend to build a set of common components which can be reused instead of duplicating code (a lot of redundant code is currently present). For styling, one common approach should be used, such as styled-components with SCSS. A predefined theming functionality can be implemented which changes the colour of the whole SUSI Chat platform on selection. I will also enhance the Admin section by adding Review Skills, Set Settings, and Reported Skills tabs and improving the code; add more functionality to settings for users; make the whole application mobile responsive; and add functionality for private skills and saving skill drafts. All the above features will be integrated on the frontend as well as the backend.
A tool for managers and contributors to propose, edit, and approve changes as per Fedora's change process.
OpenIoE is an open-source middleware platform for building, managing, and integrating connected products with the Internet of Everything: the networked association of people, processes, data, and things. It uses sensor devices to identify, measure, and share data across all connected public or private networks using standard and proprietary protocols.
The aim of this project is to complete the basic gameplay mode: a fast-paced Capture The Flag set in the dark-fantasy world of Light and Shadow, where you choose between the game's two major factions and try to capture the opposition's flag. The gameplay will also include features like an in-game shop and magic wands.
In this project I propose to add a tool to Red Hen Lab's Art pipeline based on gradient-activated class maps, which can be used to understand which features matter most for a particular task, say an Emperor-bust vs. non-Emperor-bust classifier. The tool will not be a silver bullet but another tool in the arsenal for studying features in tasks like classification, detection, matching, and captioning.
OpenStack Manila manages shared file systems across the cloud. Being able to create and access them with ease from the container world is proving quite useful; that's what csi-manila is for. One of the features in high demand when dealing with shared file systems is taking snapshots, as well as creating new shares from those snapshots, from within container orchestrators like Kubernetes. csi-manila itself is quite a new piece of software and is missing certain features, snapshots for instance. This GSoC project will try to close this feature gap.
I will implement a Migration Assistant component as specified in the GNOME Wiki. Migration Assistant will first be implemented as an independent module and will be integrated directly into GNOME Initial Setup once it's polished. The final product will be able to migrate user data, Flatpak applications, network settings, and more over the network.
The design will initially be based on the mockups in the wiki, and I will use Python 3 to implement this module.
Various deep learning architectures such as deep neural networks and recurrent neural networks are applied to fields including computer vision, speech recognition, natural language processing, drug design, medical image analysis, reinforcement learning, etc.
One of the interesting applications of deep learning is Generative Adversarial Networks (GANs), which aim to learn the true data distribution of the training set so as to generate new data points. Since it was introduced in 2014, the technique has been used in various applications, from image synthesis to synthesizing DNA sequences.
This project aims to provide flexible and extensible implementations of Least Squares Generative Adversarial Networks (LSGANs), label smoothing, Bidirectional GANs (BiGANs), and Stacked Generative Adversarial Networks (StackGANs). In addition, it aims to provide techniques to measure GAN performance, like the Inception Score (IS) and the Fréchet Inception Distance (FID).
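Of the listed techniques, label smoothing is simple enough to sketch in plain Python (illustrative only; the project's actual implementation would live in its own framework and operate on tensors):

```python
def smooth_labels(labels, real_target=0.9):
    """One-sided label smoothing for a GAN discriminator: real targets
    of 1.0 are softened to e.g. 0.9 so the discriminator does not grow
    overconfident, while fake targets stay at 0.0 (smoothing fakes is
    known to hurt training)."""
    return [real_target if y == 1.0 else 0.0 for y in labels]

smooth_labels([1.0, 0.0, 1.0])  # [0.9, 0.0, 0.9]
```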
MIT App Inventor needs a small improvement: a new component that helps inflate menu items in the app rather than using numerous buttons in the main area of the screen. This would enhance the UI of our designed apps and be very user friendly. I think a Toolbar is the best option for inflating menu items, as we can customize it as we wish and it would contain all the necessary basic items.
As the student who, together with mentors Jeremiah Foster and Gunnar Andersson, took the initiative during GSoC 2018 on Voice Command on IVI Systems, we successfully developed a conceptual approach called "VCIVING" and implemented a voice command and response system on PCs. This was called "EmulationCore" and was capable of playing music and finding a location on the map based on the user's speech. Somewhat primitive methods were adopted to extract exact information from the user's speech, and these have a high likelihood of failure. I intend to work on enhancing the speech recognition and on implementing VCIVING on the GDP.
The Developer Web Interface for ReactOS will be a single interface that addresses all developer needs, from easily watching commits to triggering tests and viewing test results. The interface will have all details regarding the GitHub repository of ReactOS, ranging from commit history to pull requests, with filtering support. Moreover, the interface will allow developers to view build and test results in detail with the help of existing ReactOS web tools.
This project will implement a new data format within Apache Camel. Concretely, the data format will cover the microformats standard through the well-known library Apache Any23. This project will open up a new spectrum of possibilities for Apache Camel users in the context of Semantic Web technologies. Moreover, the implementation of this module will pave the way for other standards such as RDF, Linked Data, RDFa, etc.
ModelPolisher is a model annotation tool for the BiGG Models knowledge base. Annotations enhance the reusability and interoperability of biological models. This project will improve ModelPolisher by extending its annotation capabilities for models lacking BiGG identifiers. An option to produce a separate glossary file and embed it in a COMBINE archive will also be added. Further, ModelPolisher will be containerized to simplify the setup of its database backend, and the project will be brought up to software development standards.
Godot is an open-source game engine which has been catching the eye of many indie developers recently. It is an awesome tool! Motion Matching, introduced by Ubisoft at GDC 2016, is a fairly new and powerful concept in the industry, so it quickly got implementations in many popular game engines like Unreal Engine. The main goal of this project is the implementation of Motion Matching in Godot.
In recent years, generative adversarial networks have proven to be very effective for training generative models and hundreds of different variants have been introduced. This proposal aims to implement novel techniques for training GANs efficiently such as mini-batch discrimination, virtual batch normalisation and additionally, adding support for Conditional GAN (CGAN). Future work for Stacked GAN (SGAN) is also proposed.
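As an illustration of one listed technique, here is a scalar toy version of minibatch discrimination in plain Python (the real method uses a learned tensor to compare feature vectors; this sketch only conveys the idea of giving the discriminator a cross-batch statistic):

```python
import math

def minibatch_closeness(batch):
    """For each sample, sum exp(-L1 distance) to every other sample in
    the batch. A collapsed batch (all samples identical) yields the
    maximum possible score, which lets the discriminator penalise mode
    collapse in the generator."""
    scores = []
    for i, a in enumerate(batch):
        s = sum(math.exp(-sum(abs(x - y) for x, y in zip(a, b)))
                for j, b in enumerate(batch) if j != i)
        scores.append(s)
    return scores
```

A batch of three identical samples scores 2.0 everywhere (two neighbours at distance zero), while a spread-out batch scores much lower.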
NetBSD provides a number of hashing algorithms related to local password management. Algorithms currently provided are DES, MD5, SHA1, and Blowfish. This project seeks to augment the existing system with the Argon2 hashing algorithm, a modern, memory-hard algorithm and winner of the Password Hashing Competition (PHC).
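Argon2 itself is not in Python's standard library, so as a rough illustration of what "memory-hard" means, here is a sketch using the stdlib's scrypt (another memory-hard KDF from the same era as the PHC); this is not the Argon2 algorithm the project would add, and NetBSD's implementation would be in C:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Memory-hard password hashing sketch via scrypt. The cost
    parameters n and r force ~16 MiB of memory per hash, which is
    what slows down GPU/ASIC cracking attempts."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
```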
Loomio is decision-making software designed to assist groups with the collaborative decision-making process. It is a free-software web application where users can initiate discussions and put up proposals. More details on Loomio can be found at loomio.org. By packaging Loomio for Debian, we could enable easy installation of Loomio on Debian machines. It would foster collaborative decision making entirely using free software.
I’ve been using Loomio for a while and helping other fellow Debian Developers like Pirate Praveen, Sruthi Chandran, etcetera with some decision making and other things.
Loomio is mostly written in Ruby, but also includes some CoffeeScript and JavaScript. The idea is to package all the dependencies of Loomio and get Loomio easily installable on the Debian machines.
Django currently has a formset class which is used as a collection instance for the forms. It’s a layer of abstraction which makes it easier to work with multiple forms.
A free alternative for Amazon's Rekognition API service
One of the selling points of the new LLVM ORC concurrent JIT APIs is that we can speculatively compile functions before we need them, with the hope that by the time they are called at run time they are already compiled. However, if we speculatively compile the whole module and its transitive dependencies, we will quickly overload CPU/memory resources and increase the start-up time of the application. To avoid this, we can select ahead of time the functions that are likely to execute next and compile them speculatively, using runtime profiles from previous app executions and/or static program analysis.
This leverages the performance of just-in-time compilation on modern multi-core machines and helps reduce JIT compilation latency.
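The selection step could be sketched like this (a toy heuristic in Python for illustration only; the real work happens inside LLVM ORC in C++, and the call-graph and profile formats here are made up):

```python
def speculation_candidates(call_graph, entry, profile, budget=3):
    """Pick up to `budget` functions reachable from `entry` that previous
    runs executed most often; these are the ones worth compiling
    speculatively. call_graph maps each function to its callees,
    profile maps each function to a call count from an earlier run."""
    seen, stack, reachable = set(), [entry], []
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        reachable.append(fn)
        stack.extend(call_graph.get(fn, []))
    reachable.remove(entry)  # the entry point is compiled eagerly anyway
    return sorted(reachable, key=lambda f: profile.get(f, 0),
                  reverse=True)[:budget]
```

Bounding the candidate set by `budget` is what keeps speculative compilation from saturating CPU and memory.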
This project aims to improve the BeagleBone Black (BBB) Board Support Package on RTEMS by adding HDMI framebuffer support. Currently, users who run RTEMS on their BBB cannot attach an HDMI display to their project due to the lack of a framebuffer driver in the BBB Board Support Package. The project is mainly focused on adding the framebuffer driver, which will enable users to attach a display to a BBB project that uses RTEMS. The end product of the project will be a graphic console. Once the framebuffer driver is working and a console can be accessed through the HDMI display, it will open the way for further additions such as graphics libraries and mouse support.
Art and graphics form an integral part of most explorations on Oppia, and are especially important for making lessons learner-friendly and communicating key concepts to learners, especially since good graphics can transcend language barriers. However, one major blocker when creating lessons is the lack of an easy way for designers/artists to add images to lessons. We would like to enable artists to contribute to lessons as easily as developers can contribute to GitHub repositories.
Liquid Galaxy for Education is a project that brings Liquid Galaxy to children by using Chromebooks as screens and a tablet as a controller. Through this, the project aims to introduce children to the benefits of Liquid Galaxy and to a completely new learning method for subjects like Geography and History.
The Liquid Galaxy for Education project has three parts:
This project provides an all-round user interface that touches every aspect of rclone. The goal is a streamlined, easy-to-use interface that gives a non-tech-savvy person the power of rclone and its cloud sync functionality.
Mission Support System currently relies on the Python basemap package for supplying non-cylindrical projections and plotting geographical features with the use of EPSG codes. This package has been deprecated and has been supplanted in the community by the Cartopy package.
In this project I will first overhaul the existing code, replacing the relevant parts with Cartopy; further optimization may be needed, after which support for more geographical projections will be added as mandated by the WMS standard.
cBioPortal utilizes a Spring MVC architecture with MyBatis for the persistence layer and a relational database (MySQL) for data storage. As the number and size of cancer datasets increase, high-performance computing and storage will only become more vital in providing an adequate cBioPortal user experience. The primary goals of this project are to use Spark and Parquet to improve the performance of the existing web APIs and to provide a high-performance computing platform for future development.
With over 3 billion total downloads since 2005, VideoLAN's VLC is one of the most popular and globally known free video players. Yet, unfortunately, that tale is not told on the iOS App Store. Despite VLC's myriad of amazing and useful features, it is currently ranked in the bottom 30s of video-related apps on the iOS App Store. This project aims to continue and complete the modernization of the VLC iOS user interface. Additionally, I aim to refactor critical screens and optimize UX designs to enhance the user experience, thus encouraging greater user retention.
GANGA (Gaudi/Athena and Grid Alliance) is an interface used by scientists to access the huge amount of computing power and storage available to them as part of the LHC computing grid. Ganga provides a simple yet powerful interface for submitting and managing jobs on a variety of computing backends. The project aims to evaluate the CPU usage, memory usage, and persistent memory usage of the Ganga framework. It further aims to reduce Ganga's memory consumption when executing jobs. Finally, it aims to implement a new persistence model that stores the metadata in a more compact form, thus reducing both the memory consumption and the time taken to read the data.
The ingester is a stateful component in the Cortex ecosystem that builds Prometheus chunks from incoming samples. In order to distribute load, a Distributed Hash Table is used to route requests to different Ingesters. The current implementation only allows users to scale up their ingester pools by 1 Ingester per 12 hour period, which is not great when load changes dramatically. This project will be to improve how Ingesters hand over their data when they are being created or deleted in order to easily scale.
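The routing described here, a distributed hash table over the ingester pool, can be sketched with a toy consistent-hash ring in Python (the ingester names and token scheme below are illustrative, not Cortex's actual ring implementation):

```python
import bisect
import hashlib

class HashRing:
    """Toy consistent-hash ring: each ingester owns several tokens on the
    ring, and a series is routed to the ingester owning the next token
    clockwise from the series' hash. Adding or removing one ingester
    only remaps the keys adjacent to its tokens."""

    def __init__(self, ingesters, vnodes=8):
        self.ring = sorted(
            (self._hash(f"{name}-{i}"), name)
            for name in ingesters for i in range(vnodes))
        self.tokens = [token for token, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, series):
        idx = bisect.bisect(self.tokens, self._hash(series)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["ingester-1", "ingester-2", "ingester-3"])
owner = ring.owner('http_requests_total{job="api"}')
```

Because routing is deterministic, the same series always lands on the same ingester until the ring membership changes.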
Bloom is the release creation tool for the ROS ecosystem. From a standard package manifest, Bloom can generate Debian or RPM metadata for building binary packages on different Linux distributions. In order to bring support for modular binary packages to Windows, this project will refactor the existing generators in the Bloom code base, then extend Bloom to generate Vcpkg metadata for building packages on Windows.
Application of ANN algorithms implemented in mlpack.
Working on librarifying llvm-objcopy for use throughout the binutils. The aim is a good base for a library such that many object file formats can be added later. I think we can create a much better and more ergonomic object file library than GNU's BFD, considering we can use it as an example. Also, I find C++ much better suited than C for creating ergonomic and easy-to-use libraries. If the library exists inside LLVM and is used by its binutils, we can create a much better library. If done correctly, many of the binutils will just have to interface with the command line and with the library, doing little work with the underlying object file on their own.
The goal of the project is to provide a production-ready, auto-configured service that uses all the AeroGear Voyager Server functionality, together with a command line tool for building and extending a GraphQL-based Node.js server. The tool will help developers get started by generating a standalone Node.js server that provides out-of-the-box features for developers looking for real-time database capabilities, easing the development effort required to build and deploy a functional server to production.
Testing software is essential. As a project gets bigger, the difficulty of testing the interactions of all its components increases. Property-based testing, with libraries like QuickCheck and quickcheck-state-machine, helps counter this problem. In this project we will use these libraries to test stateful code, with a main focus on parallel programs and programs with injected errors. The goal is to benefit the whole Haskell community by providing guides and template examples and by improving the API and functionality of these testing frameworks.
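The core idea behind those libraries can be sketched in a few lines of Python (a toy stand-in for QuickCheck's loop, without the shrinking or stateful machinery the project actually targets):

```python
import random

def check_property(prop, gen, trials=200, seed=0):
    """Minimal property-based testing loop: generate random inputs and
    return a counterexample if the property ever fails, else None."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = gen(rng)
        if not prop(case):
            return case
    return None

def random_list(rng):
    return [rng.randint(0, 9) for _ in range(rng.randint(0, 5))]

# A true property: reversing a list twice is the identity.
ok = check_property(lambda xs: list(reversed(list(reversed(xs)))) == xs,
                    random_list)

# A false property: sorting is not the identity, so a counterexample
# (some unsorted list) should be found within the trial budget.
bad = check_property(lambda xs: sorted(xs) == xs, random_list)
```

Real frameworks add counterexample shrinking and, for stateful code, generate whole command sequences against a model, but the generate-and-check loop is the same.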
The notification system mainly consists of two parts: in-app navbar notifications and Web Push notifications.
- In-app navbar notifications: Currently, notifications about likes/comments on a user's post are sent out through email, and there is no notification dropdown in the navbar. I will implement a per-user notification feature in the navbar using the activity_notifications gem.
- Web Push notifications: I will build a system on publiclab/plots2 for triggering desktop Web Push notifications when new comments/likes or posts are created, using ActionCable.
Enhancing the user experience by improving website responsiveness. This will be achieved by making backend queries faster, improving the user interface, adding accessibility features for differently-abled users, etc.
Advances in RNA sequencing technologies have revealed the complexity of our genome. Long non-coding RNAs (lncRNAs) make up the majority of the non-coding transcriptome. Understanding the significance of this RNA world is one of the most important challenges in biology today, and the lncRNAs within it represent a gold mine of potential new biomarkers and drug targets. Their discovery is still at a preliminary stage, and to date very few lncRNAs have been characterized in detail. However, it is clear that lncRNAs are important regulators of gene expression and are thought to have a wide range of functions in cellular and developmental processes. There are many specialized lncRNA databases (like RefSeq, GENCODE, Ensembl, SGD, and TAIR). We will use machine learning techniques to highlight and compare two sets of calls (from Ensembl/GENCODE and RefSeq) and determine which calls are incorrect. Goal of the project: implement a machine learning model (a second-pass filter) which will predict and validate credible calls (true-positive/false-positive cases) produced by RefSeq and GENCODE (or Ensembl).
Differentiable programming is a programming paradigm in which we can differentiate through the program itself. It allows us to exploit the knowledge already embedded in a problem and apply existing deep learning techniques to it. This project aims to develop a ray tracer in Julia and interface it with Flux/Zygote for automatic differentiation support. We shall also demonstrate the use of this ray tracer in downstream reinforcement learning tasks.
The project aims to update existing functions to make them compatible with TensorFlow 2.0 and to add new image processing operations to make the overall processing faster; previously, image processing was done with another library and the resulting data was then used for training.
This project aims to add custom lint rules for oclif in TypeScript and JavaScript so that other developers can create their CLIs without hassle and find small bugs and errors easily and effectively. It will reduce the time needed for bug fixes due to functional errors in code. The linter is used to enforce coding standards throughout the codebase.
This plugin draws the desired types of nodes and arrows on a hidden canvas. These dynamically drawn images are created in the browser as Blob-type images and used by ccNetViz as textures for visualization.
These pre-prepared shapes will be added to the plugin.
The library will be able to draw the desired node and arrow graphs without compromising performance. Also, the plugin doesn't affect the core library, keeping the core lightweight.
With this plugin ccNetViz can:
With these developments, settings will be prepared and documented to enable the user to use them easily.
GCC supports built-in functions from the C99/C11 standards along with features of the IEEE standards. These functions perform the appropriate calculations according to users' requirements, taking various types of arguments and returning values on GCC's supported data types such as int and float. Such built-in functions help users, developers, and GCC developers minimize repetitive calculations and effort. Optimization steps are carried out to achieve faster compile and run times, folding and inlining being among them. The purpose of this project is to implement more such functions in GCC, as added in ISO/IEC TS 18661 (supporting features of IEEE 754), folding them and expanding them inline wherever appropriate.
Developing an open source parser for PLUTO that can read PLUTO scripts as input and generate valid Python 3 code from the script.
Some Mercurial operations, such as rebase or graft, can be interrupted by merge conflicts; --abort and --continue flags help the user recover, but the downside is that the user needs to remember the last command that was used. This project is all about implementing generic hg continue and hg abort commands which will automatically detect the operation that is currently in conflict. Furthermore, this will provide functionality for extensions to plug in their own logic to abort and continue an operation. The project also covers hg update --abort, which will add the logic to revert an update after a merge conflict has already occurred; this is one of the most requested features in Mercurial and has not yet been resolved.
Intersection problems are common in many engineering applications, for example the enforcement of non-penetration constraints in contact mechanics, cut-cells in immersed boundary methods, and lattice generation in additive manufacturing. An efficient search structure is important for solving intersection problems. The CGAL library offers the AABB tree structure for fast intersection computation. The k-dop, whose facets are determined by hyperplanes with outer normals in a fixed number of directions, is a generalisation of the bounding box, and it promises to be more efficient and flexible than the AABB tree in intersection search. This project aims to develop a k-dop tree structure in CGAL; its robustness and efficiency will also be investigated in comparison with the AABB tree.
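The k-dop idea can be sketched in a few lines of Python (a toy 2D illustration, not CGAL's C++ design): project the points onto each fixed direction and keep the min/max interval, and note that two k-DOPs built over the same direction set can only intersect if every pair of slab intervals overlaps.

```python
def build_kdop(points, directions):
    """Compute a k-DOP: for each direction, the min/max of the dot
    products over all points gives one slab; k = 2 * len(directions)."""
    slabs = []
    for d in directions:
        projections = [sum(p * a for p, a in zip(pt, d)) for pt in points]
        slabs.append((min(projections), max(projections)))
    return slabs

def kdop_overlap(a, b):
    """Conservative rejection test between two k-DOPs sharing the same
    direction set: disjoint slabs in any direction prove no intersection."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

# An 8-DOP in 2D: the axis directions plus the two diagonals.
DIRS = [(1, 0), (0, 1), (1, 1), (1, -1)]
tri1 = build_kdop([(0, 0), (1, 0), (0, 1)], DIRS)
tri2 = build_kdop([(5, 5), (6, 5), (5, 6)], DIRS)
kdop_overlap(tri1, tri2)  # False: the slabs along (1, 0) are disjoint
```

With more directions the k-DOP hugs the geometry more tightly than an axis-aligned box, rejecting more non-intersecting pairs early.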
This project helps React Native developers easily bring neural networks into their mobile applications. They can use off-the-shelf models from TensorFlow, or use pre-trained models and transfer learning with their own data to fine-tune models for their specific applications. And if React Native developers are proficient in neural networks, they can even build and train their own custom models from scratch. This project will provide the community with a starter project showing how these two frameworks play together, along with a complete end-to-end example.
Mission Support System is flight planning software which a researcher can use to analyze predicted atmospheric data and plan a flight path with 3D waypoints. The software in its present state allows editing by a single user per flight path. To share this work, one has to export it as a $name.ftml file and send it to other researchers for further planning. This back-and-forth communication not only consumes a lot of human effort and time, but can also be frustrating when the number of researchers involved in a project grows, say to three or more.
I propose a solution to this problem: the development of Mscollab, which stands for "Mission Support Collaboration". The Mscollab server will facilitate real-time, collaborative editing of flight paths by authorized users. By design, it will also provide a chat facility for users collaborating on a project. Its UI will be part of msui, the core user interface module of mss. It will additionally provide insights, for analytics purposes, into temporal changes to waypoints and the users who created them. Mscollab-server will be a standalone server built with Python, Flask, and python-socketio.
The current way of interacting with the Popper engine is through the Popper CLI tool. This tool implements all the features of the engine, including execution, scaffolding, CI integration, etc. A project that wishes to use the Popper engine has to use the CLI tool. I propose separating Popper into two parts in order to make it easier to use the workflow execution engine in other projects: first, a library containing the execution engine itself; second, a CLI frontend that implements the aforementioned library. With these changes users will be able to extend Popper by creating new ActionRunner and Workflow subclasses. In addition, I wish to implement a REANA workflow engine for Popper; this will allow Popper workflows to use all the features REANA has, like Kubernetes, workflow specifications, shared storage systems, container technologies, etc. This will also expand the REANA ecosystem and give existing users of the platform another way of defining workflows.
The Awesome Game Demo project's gameplay, titled Lost Journey, was inspired by the game Lost Journey. The game features the actor Girl, who has lost her memories; the gameplay has her collect them by tackling struggles and solving fun puzzles to unlock her gifted memories.
We already have support for some physical AArch64 mainboards. However, we don't have QEMU/AArch64 support yet. ARM-based chips are becoming popular nowadays, especially in embedded systems, and this trend is spreading to 64-bit ARM architectures. Supporting QEMU/AArch64 in the coreboot project would help developers with compatibility testing. It would also help make sure that changes to architecture code don't break current implementations.
The goal of this project is that we can use the following command:
$ qemu-system-aarch64 -bios build/coreboot.rom -machine virt -cpu cortex-a53
Every year more than 100 students apply for GSoC under the umbrella organization PSF. Currently there is a multi-user blogging website built on the WordPress CMS, hosted for the students to publish their weekly blogs, and a static landing page for reaching out to people with more information about the program.
This project aims to build a platform which allows smooth management of the GSoC program at PSF every year and also ties everyone associated with it to PSF, so that they and their work can help out others in the future.
The broader category of logic I've decided to work with is modal logic. With the guidance of Dr Ben Blumson, I was able to decide on which topics precisely I want to work on. The debate between essential and accidental properties of objects dates back centuries in Western philosophy; it is such a crucial concept that it is discussed numerous times in the world's most ancient philosophy, Hinduism. I plan to formalise different versions of Salmon's arguments for origin essentialism. For each version, I will identify the problem it faces and subsequently solve it in the next version. The majority of the theory is summarised here. Along with solving the Recycling and Generality problems, I will also take on the Tolerance problem, not by reducing it to a version of the Sorites paradox but by making changes to the modal logic S5. I will build on the work of Salmon and Lewis, who deny transitivity between worlds.
The aim is to develop a Page Builder interface for the frontend styles section using VueJS, with a drag-and-drop GUI for module positions. It uses the Bootstrap grid system to make it easier to design web pages that are responsive on all screen sizes. The user will be able to fully customise the site template style, using an interactive area to resize columns (e.g. col-sm-3 to col-sm-4). This involves updating the "params" column in #_template_styles and implementing a page renderer in index.php to preview the style based on the JSON input from "params".
The task is to add a RETURNING option to the INSERT statement in the MariaDB server, which returns the set of changed rows to the client. This feature already exists in other DBMSs such as PostgreSQL, so having it in MariaDB improves compatibility.
Currently, this project targets the fourth edition of the book. There are plenty of APIs that are yet to be implemented in the AIMA4e branch. To be precise, the Problem Solving section has already been implemented, though there are subtle changes and minor additions still to be made. The "Knowledge, reasoning and planning" and "Uncertain knowledge and reasoning" sections were partially implemented last year, and implementation of the "Learning" section has not started yet. Therefore, as part of my GSoC proposal, I plan to implement the Learning section of the textbook. Depending on the advice of my mentor, I would also be interested in building APIs for the "Knowledge, reasoning and planning" and "Uncertain knowledge and reasoning" sections of the textbook for the AIMA4e branch. I also plan to add explanatory notebooks for these sections; I have worked on some notebooks before, and would love to contribute to them.
Besides this, I plan to write tests for the updated algorithms of the fourth edition and demonstrate them on various problems, as is done in the demo package of the aima3e branch.
This project is a Jira scanner for jQAssistant. It enables jQAssistant to scan and analyze Jira projects.
Perma.cc was created with the aim of tackling the increasing incidence of link rot. It does so by archiving copies of linked resources and providing a permanent URL for them. The main aim of this project is to integrate the functionality provided by Perma.cc into lumendatabase. Perma.cc provides an API for its services, and this project will integrate that API with lumendatabase, along with the necessary UI and automated tests for the implemented functionality.
The main aim of this project is to develop a tool to generate highly trustworthy RDF triples from given abstracts. To develop such a tool, we will implement algorithms that take the output of the syntactic analyzer together with DBpedia Spotlight's named entity identifiers. Ultimately we will apply this tool to the existing DBpedia abstracts to generate a new .nt file as a dataset.
The PVRDMA device enables virtual machines to use RoCE without assigning a physical device or a virtual function. It does not need the whole guest memory to be pinned and can support live migration. This project addresses the latter point. While the PVRDMA device can be used in a hybrid environment where the nodes can be a bare-metal machine or a VM, this project aims to enable live migration when all the nodes are VMs. The above assumption allows a relatively easy approach by creating a QEMU protocol for broadcasting/receiving notifications during live migration. Since RoCE uses Ethernet at the data link layer and QEMU already supports live migration for emulated Ethernet devices, the project will concentrate on passing the device state from the source to the destination using the protocol mentioned above.
This project aims to provide an API for different programming languages to load/unload firmware and communicate with the PRUs from user space. A terminal-based application will also be provided for debugging the PRUs step by step.
During an update, some location-renamed extensions, which change an XWiki document's location in the new version, need to migrate their documents, XClasses, XObjects, and XProperties to the corresponding new version via the Data Migration Framework. In my project proposal, two new migrators, named XProperty Migrator and XObject Migrator, are created in the Data Migration Framework to migrate properties and objects respectively. Three script services, called Dependency Detect, Migration Creation, and Mandatory Migration Trigger, are provided to make sure the extension is executable, to create the migration descriptor, and to manually apply migrations, respectively.
Throughout the summer I will be devoted to contributing to and working on various modules, making sure that the work complies with the project's terms and standards, that the timeline mentioned below is strictly followed, and that I collaborate with mentors for an effective output. Over the summer, following the timeline, I will work on a wide range of functionalities: • Incoming notifications • Support/chat messages • Integration with the Pockets API • Adding language support • UI for the Forgot Password component • Enhancing the UI of the web app • Following the guidelines of the assigned mentor • Building the modules assigned to me
As per the description provided, I noticed that version 2.0 was built last year. After reading the description and going through various other projects under the same organization, the Mifos Initiative, I realized that the main focus of the app is ease of use as well as automation, so that our primary users can use it in a flexible way.
The current web interface of OWTF is non-functional and some of its pages are not yet implemented. This project is about implementing a fully functional and responsive web UI for all the pages of the app, written in React with Redux as the state manager. The project also includes introducing new features and refining the current layouts to ensure excellent reliability and performance. Implementing an automated testing environment with good unit/integration test coverage is also an important part of the project. The project also calls for adding TypeScript to the app (if time permits) to eliminate a large number of errors from the code.
We can offer SaaS and PaaS with Docker/Kubernetes and treat the infrastructure as code, with the entire infrastructure connected and orchestrated to perform tasks together. We can create a pipeline and bring up the infrastructure using CircleCI or Travis, keeping the pipeline ready for continuous deployment and continuous integration. We can even use Packer to create AMIs, giving us a better OS snapshot with our own Docker image in it. GNS3 could also be used to simulate the network, while Ansible playbooks could automate our network configurations.
With GTK 4 around the corner, supporting it in Rust allows more developers to fully utilize Rust in GNOME development, with its rich features and safety guarantees. To support GTK in Rust, the gtk-rs project has provided great tools for creating bindings using the GLib object model. This project will utilize the tools from gtk-rs and extend GTK support to its newest version. As the GTK toolkit contains various components, this project will address them in the following order:
In addition, this project will port the GTK Rust examples to GTK 4, as validation of the work done on the API bindings.
The project aims to extend the capabilities of the control daemon so as to provide smooth communication between the graphical controller, the web server, and the control daemon. It focuses on moving functionality from shell scripts into the control daemon.
A crucial component of the user experience with bots in the Rocket.Chat Android app is rich messages. Currently, the Android app has only minimal support for rich messages: horizontal buttons, multiple buttons, image buttons with URLs, and buttons with URLs.
The goal is to improve user experience with bots by:
Almost every web browser can sync data (bookmarks, history, saved passwords, etc.) between instances installed on different devices. This feature lets users pick up on one device where they left off on another, and is considered essential for a modern browser; but Falkon still lacks it. This project aims to give Falkon the ability to synchronise its browser data with another installed instance of Falkon, possibly on another device.
The PyOpenWorm data management tool aids in creating, storing, and sharing information about Caenorhabditis elegans and the evidence supporting it. This initiative will focus on providing a primary distribution source for that information by integrating a BitTorrent client into the existing PyOpenWorm codebase, enabling researchers to transfer data sources. This will provide the OpenWorm Foundation and the larger neuroscience research community with a peer-to-peer file sharing framework that can limit access to sensitive information and protect against malicious changes. It will pave the way for easier sharing and collaboration, ultimately leading to a better understanding of C. elegans.
The human eye is more tolerant of errors in areas of high activity and quick to spot errors in areas of lower activity. To leverage this psychovisual characteristic, quantization can be made adaptive based on an activity mask. Activity masking will be implemented in two phases:
Trellis quantization will be used, passing the activity measurements as weights to the trellis; the output of this activity-masked trellis is then used for quantization. This helps quantization perform well on PSNR metrics while increasing perceptual image quality significantly. This feature has proved to perform well in x264 and is expected to yield similar results in rav1e.
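The proposal does not fix a particular activity measure or mapping; as a minimal sketch, assuming a variance-based activity measure and a power-law mapping relative to the mean activity (both hypothetical choices here, not rav1e's actual scheme), per-block adaptive quantizer steps could look like:

```python
import statistics

def block_activity(block):
    # Activity of a block: population variance of its pixel values.
    return statistics.pvariance(block)

def adaptive_qsteps(blocks, base_qstep, strength=0.5):
    # Scale each block's quantizer step relative to the mean activity:
    # busy blocks (variance above the mean) get a coarser step, flat
    # blocks a finer one, exploiting the masking effect described above.
    # The +1.0 smoothing avoids division by zero on all-flat frames.
    activities = [block_activity(b) for b in blocks]
    mean_act = statistics.mean(activities)
    return [base_qstep * ((a + 1.0) / (mean_act + 1.0)) ** strength
            for a in activities]
```

Feeding these per-block steps into the trellis as weights is then phase two: the trellis search trades off rate against the activity-scaled distortion.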
There are various Wikimedia projects that are edited by volunteers around the world. Hashtag Search is a tool that lets users search for hashtags used in Wikimedia edit summaries. As of now, the tool's functionality is quite basic: a few simple search options are provided, and the results are listed with an option to download them as CSV. Many Wikimedia campaigns use this tool to track edits, and users may want more detail for a particular search. This project, 'Create a subpage for statistics and charts related to a hashtag search', aims to create a page that displays more detailed statistics, charts, and graphs for a given search. Users can also optionally download the detailed data as CSV.
This project aims to investigate the application of ZSTD within the ROOT framework; benchmark it against the other algorithms; test it on real LHC data files; and investigate schemes to integrate dictionary-based compression into ROOT files. This will require careful analysis, research, and benchmarking through all stages of the project to ensure that the proposed changes are properly verified and documented and result in well-defined benefits for ROOT.
Edward Moore's algorithm is an improvement over the Bellman–Ford algorithm and can compute single-source shortest paths in weighted directed graphs (including negative weights). It has an average running time of O(|E|) on random graphs and the same worst-case complexity as Bellman–Ford, O(|V| × |E|).
Boost::Breadth First Search is the implementation of the classic breadth-first search algorithm in the Boost Graph Library. It is a basic graph traversal algorithm that can be applied to any type of graph. One of its many applications is finding the path with the fewest edges from a given source to an arbitrary destination. It has a linear time complexity of O(|V| + |E|).
Binary Breadth First Search is a modification of the standard Breadth First Search algorithm and can calculate single-source shortest paths in a weighted graph where weights of all edges are either zero or some constant C. It has a linear time complexity of O(|V| + |E|) compared to O(|E| + |V| log |V|) of Dijkstra’s algorithm.
I propose to add the above three algorithms to pgRouting during the GSoC period.
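The deque trick behind binary breadth-first search can be sketched as follows (an illustrative Python sketch, not pgRouting's actual C++ implementation; the graph is assumed to be an adjacency dict of (neighbor, weight) pairs with every weight either 0 or a constant C):

```python
from collections import deque

def zero_c_bfs(adj, source):
    # Single-source shortest paths when every edge weight is 0 or C.
    # Relaxing 0-weight edges to the front of the deque and C-weight
    # edges to the back keeps the deque sorted by distance, so each
    # vertex settles without a priority queue: O(|V| + |E|) overall.
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    dq = deque([source])
    while dq:
        u = dq.popleft()
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                (dq.appendleft if w == 0 else dq.append)(v)
    return dist
```

The same structure covers ordinary BFS (all weights equal) and, divided by C, reduces the 0-or-C case to the classic 0-1 BFS.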
ELI5 is a library for explaining machine learning models. However, it has yet to add support for neural networks. Thus, it is proposed to add explanations for models from PyTorch, Keras, and Tensorflow. In particular, the robust and interpretable Grad-CAM technique will be used to produce visual explanations of neural network predictions, especially for image and text-based tasks. Using Grad-CAM, the parts of input that contribute the most to a given prediction are highlighted. In the case of images a heat map will be produced, and for text the individual words and characters will be highlighted.
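The core Grad-CAM combination step can be illustrated on plain nested lists (a sketch of the technique only, not ELI5's eventual API): each feature map is weighted by its spatially averaged gradient, the weighted maps are summed, and negative contributions are clipped.

```python
def grad_cam(activations, gradients):
    # activations, gradients: [channels][height][width] lists holding the
    # target conv layer's output and the gradient of the class score
    # with respect to it.
    C = len(activations)
    H, W = len(activations[0]), len(activations[0][0])
    # Channel weight = global average of that channel's gradient.
    weights = [sum(sum(row) for row in gradients[c]) / (H * W)
               for c in range(C)]
    # Weighted sum of activation maps, then ReLU to keep only the
    # regions that push the class score up.
    return [[max(0.0, sum(weights[c] * activations[c][i][j]
                          for c in range(C)))
             for j in range(W)] for i in range(H)]
```

In practice the resulting map is upsampled to the input resolution and overlaid as a heat map (for images) or mapped back onto tokens (for text).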
Using VNET in FreeBSD jails, the root of the jail can set IP addresses at will; however, sysadmins may need to limit these privileges for various purposes. With a MAC framework, the root of the host can restrict the root of the jail from setting arbitrary IP addresses. Currently, there is no MAC policy module for such restrictions, meaning these rules are written in the kernel itself. The project focuses on writing a MAC module for the TrustedBSD MAC framework to enable easy management of privilege restrictions (configuring the network stack) for jails.
The features this new MAC policy module should include are:
Also, this MAC module should support both IPv4 and IPv6 addresses.
Hello. My name is Rucha Deodhar. I would like to contribute to MariaDB during GSoC 2019. More information can be found in my draft proposal in the attached Google Doc. Any feedback would be appreciated. Thanks!
The PANOPTES Observatory Control System (POCS) uses different hardware components and a scheduler that are closely coupled to the main Observatory class. This makes the whole architecture inflexible, and dynamically adding or removing components is problematic. To fix this, the Dependency Inversion principle will be used to modify the architecture for flexible use of the system.
We will develop an R package for two families of skew-t distributions that have different tail behavior for the left and right tails: the family of asymmetric t-distributions (AST) introduced by Zhu and Galbraith (2010), and the family of generalized asymmetric t-distributions (GAT) introduced by Baker (2018). The importance of these two families is that they go beyond the symmetric tail behaviors of the skew-t distributions, as described in Azzalini and Capitanio (2014), and hence can provide better fits for certain data arising in applications, especially asset returns. The resulting skew-t package, st, will provide not only the skew-t MLEs but also basic computations of the skew-t probability density and cumulative distribution functions. Furthermore, the package will compute confidence intervals for the parameter estimates, and hypothesis tests concerning the parameters. Due to the complexity of the AST and GAT distributions, and the desire to use them in empirical asset pricing studies with large cross-sections (e.g., 1000 to 5000 stocks), we plan to use the Rcpp package to integrate C++ code to obtain high-performance MLE computations.
This proposal aims to add support for including labeled and versioned datasets from the same source, and to build a storage system that can store and maintain data efficiently.
There are several applications where we need to work with ordered data (e.g., time series): financial data, climate data, radio signals, genome data in bioinformatics, etc. Often it is of interest to find patterns or structure in this data and to detect abrupt changes in structure in order to model the data effectively. There is a class of dynamic programming algorithms that can solve the optimal partitioning problem and find optimal segments given a statistical criterion.
A quadratic-time algorithm described in https://link.springer.com/article/10.1007/s11222-016-9636-3 is one such algorithm; it solves the optimal partitioning problem for the square error loss function. Currently there are no R packages that provide a reference implementation of this algorithm. This project provides a C++ reference implementation of this optimal partitioning algorithm in an R package (opart) using the square error loss function, which is easy to understand and can be modified to support different loss functions when developing other changepoint models.
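The dynamic program itself is compact. A sketch in Python (the package itself will be C++/R; the `penalty` constant here stands in for whatever model-selection criterion is used):

```python
def optimal_partitioning(y, penalty):
    # F[t] = best penalized cost of segmenting y[:t]; each segment
    # (s, t] costs its sum of squared residuals around the segment
    # mean, plus a per-segment penalty. O(n^2) segment evaluations.
    n = len(y)
    # Prefix sums of y and y^2 give each segment cost in O(1).
    S = [0.0] * (n + 1)
    S2 = [0.0] * (n + 1)
    for i, v in enumerate(y):
        S[i + 1] = S[i] + v
        S2[i + 1] = S2[i] + v * v

    def seg_cost(s, t):  # sum of squared residuals of y[s:t]
        return S2[t] - S2[s] - (S[t] - S[s]) ** 2 / (t - s)

    F = [0.0] + [float("inf")] * n
    last = [0] * (n + 1)  # last changepoint before t, for backtracking
    for t in range(1, n + 1):
        for s in range(t):
            cost = F[s] + seg_cost(s, t) + penalty
            if cost < F[t]:
                F[t], last[t] = cost, s
    # Recover the segment end positions.
    bounds, t = [], n
    while t > 0:
        bounds.append(t)
        t = last[t]
    return sorted(bounds)
```

Swapping `seg_cost` for a different per-segment loss is exactly the kind of modification the project description has in mind.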
The project focuses on enhancing the current "stream settings" features. It also includes refactoring the stream settings code and improving the user experience of stream settings.
As InterMine provides integrations with different biological data sources, it has to work with different file formats to read data for those sources. In bioinformatics, many file formats are used for storing DNA and protein sequence information. Each file format has a formal specification, and a parser can be written by following the specification rules. But since most file formats are simple text files that can easily be edited by anyone, it is easy to make mistakes, and people may not comply with the standards while creating or editing files, which can lead to errors when parsing or using that data.
This project's idea is to create a standalone validation tool/library that can test the validity of one or more biological file formats. The project will provide full control over how validation is performed: users can use the project's rich API to customize the behavior of the library. It will provide a generic validation library that can be extended to any number of file formats, and users can even provide their own validation implementations to customize the library's behavior.
Currently, Android uses its own annotations that are similar to some in the Checker Framework, such as @NonNull, @IntRange, and @IntDef. The goal of this project is to add support for resource and thread annotations for Android in the Checker Framework, and then do a case study to show the utility (or not) of pluggable type-checking, by comparison with how Android Studio currently checks these annotations.
The goal is to develop string processing utilities for the mlpack library that manipulate string data and convert it into numeric form so that machine learning algorithms can be applied.
Most of the algorithms in mlpack are written to handle numeric data, so it is important to convert string data to numeric form. By providing string utility functions, we can convert strings to numeric data and then apply machine learning algorithms.
Also, data preprocessing is an important step in machine learning for achieving high accuracy; string utility functions such as stop word removal and punctuation removal make it possible to clean string datasets.
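A minimal Python sketch of the kind of utilities proposed (mlpack's actual C++ API will differ; the stop-word list here is an illustrative subset, and `encode` is a hypothetical helper name):

```python
import string

STOP_WORDS = {"the", "a", "an", "is", "of", "and"}  # illustrative subset

def clean_tokens(text):
    # Strip punctuation, lowercase, split, and drop stop words.
    table = str.maketrans("", "", string.punctuation)
    return [w for w in text.translate(table).lower().split()
            if w not in STOP_WORDS]

def encode(corpus):
    # Dictionary encoding: map each remaining token to an integer id,
    # yielding numeric rows a learning algorithm can consume.
    vocab = {}
    encoded = []
    for text in corpus:
        encoded.append([vocab.setdefault(tok, len(vocab))
                        for tok in clean_tokens(text)])
    return encoded, vocab
```

For example, `encode(["The cat sat.", "A cat ran!"])` maps both occurrences of "cat" to the same id while dropping "the", "a", and the punctuation.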
The proposal specifies which functions I would implement and how, and includes a timeline. I have described my background and added the relevant coursework to account for the knowledge needed for the project.
I have raised pull requests for many functions, along with tests, which are currently under review. I will continue to sort out the remaining issues and look forward to contributing to the organization.
SWAN (Service for Web-based ANalysis) is a cloud data analysis service developed and powered by CERN that provides Jupyter notebooks on demand. It is based on Jupyter upstream technology but it is deeply integrated with CERN-specific services, e.g., EOS, CERNBox, CVMFS. The project aim is to create a testing framework for both upstream Jupyter components and SWAN-specific components which will allow the addition of new tests to cover new features of the SWAN service and would be self-contained and distributable by means of Docker containers. The testing framework should include functional tests, regression tests and performance tests.
A version of the Linux kernel's XDP framework that supports the execution of Lua scripts as well as eBPF programs.
The Mifos mobile application is built on top of the Apache Fineract 1.x client-facing APIs to enable self-service channels for clients to interact with their own data and transact in a self-service manner. The project aims to advance the application from version 3.0 to version 4.0, giving members/clients a self-service channel that allows them more direct control of, and visibility into, their financial livelihood.
For my Google Summer of Code project proposal I suggest implementing the technique described in the paper “Practical Path Guiding for Efficient Light-Transport Simulation” by Thomas Müller, Markus Gross and Jan Novák. The authors describe an iterative process to learn and approximate a scene’s spatial and directional radiance distribution in a tree structure they call SD-Tree. The learned approximation is utilized for path guiding, i.e. the importance sampling of incident radiance for intelligent path construction.
At the end of the project, nftables' feature set should very closely match that of iptables. This project has the objective of implementing all the missing features. These have already been split into sub-tasks 1.1 to 1.5 by the netfilter core team, and this proposal draws upon them.
Each of the sub-tasks is a self-contained feature. Sub-tasks 1.1, 1.2, and 1.4 should each be completed in a week, though I have allocated a bit more than that for incidentals. I expect sub-task 1.3 to take a bit more than a week.
There is an additional sub-task 1.6 defined ("Rework Netfilter logging"). I do not go into the details of this, as it requires a more thorough investigation; the ability to activate two nf_tables loggers at the same time seems to go beyond the scope of nf_tables itself. The last three weeks of August have been reserved in the timeline to investigate the feasibility of this sub-task, and to implement it if feasible.
PODIO is a C++ library that allows the creation of event data models and efficient I/O code for HEP experiments. It does so by avoiding deep-object hierarchies and virtual inheritance. On the other hand, HDF5 is a data format that allows one to manage extremely large and complex data collections. Due to its versatility as a data model and rich set of performance features, it is an ideal format to store PODIO data in. The aim of this project is to implement an HDF5 backend for PODIO.
In this proposal, I would like to contribute new functionality to the ArduPilot codebase to better utilize VIO tracking camera data for accurate localization and navigation, freeing up resources on the companion computer for other high-level tasks. I will also provide documentation with step-by-step hardware and software integration procedures for real-life experiments, so that anybody can follow them and even more amazing applications can be developed in the future.
Dunner is a container-based task runner tool built in Go using Docker's client library. A user can define multiple tasks, each with sequential steps that run in separate containers. Various features have recently been introduced to extend the scalability and usability of Dunner. The advantages of a container-based task runner are reduced host system dependencies and improved security; containers are also more lightweight than virtual machines.
Augur is a fully functional prototyping web stack for CHAOSS metrics. It provides structured data mined from git repositories using a plugin architecture that incorporates other open source metrics projects like Facade and FOSSology. The main aim of this project is to extend Augur's functionality by implementing Risk and Growth-Maturity-Decline CHAOSS metrics and use cases, with a focus on the open source community manager use case, allowing community managers to leverage these metrics to better manage their communities and projects.
RoboComp's existing simulator, RCIS, is based on OpenSceneGraph technology and custom-made actuators and sensors. This project will build prototypes of robotics simulation using V-REP and use its APIs to connect them to the RoboComp ecosystem. Specifically, the project consists of implementing the RoboComp omnirobot, joint motor, laser, and RGBD interfaces, and creating models of other RoboComp robots in V-REP.
Add tooling to the Guix package manager for configuring and building local "copies" of remote systems and transferring those copies to their respective target machines.
Currently Omega uses external filters to extract text from a wide range of file formats. It is possible to improve this by replacing the filters with libraries, using multi-threading, and applying resource constraints to isolate library bugs in a subprocess so that they cannot crash omindex.
New non-destructive privilege escalation exploits like the runc container escape (CVE-2019-5736), Dirty Sock (CVE-2019-7304), and Dirty COW (CVE-2016-5195) will be added to Infection Monkey. Functionality for privilege escalation and basic post-exploitation reconnaissance will be added if I manage to implement it in time.
Neural networks are a powerful tool in machine learning, and an integral part of any network is its architecture. However, structuring and comparing new architectures is non-trivial. We describe a toolkit for structuring neural networks that leverages a special class of Naperian functors and functions between them. The explored class admits a canonical traversable structure that allows common neural network constructs, and shows the literal correspondence between higher-order functions and some architectures. We also show that our toolkit is compatible with the monadic computations necessary for training and evaluation. Our methods allow for expressive, unified, and elegant constructions.
ArviZ is a Python package for exploratory analysis of Bayesian models, from diagnostics to visualization. It is designed as a backend-agnostic tool, with the goal of reaching the widest possible user base and thus helping to extend best practices among Bayesian inference practitioners.
Two key problems in this field are model comparison and convergence analysis. Model comparison is not trivial because of the different structures and numbers of parameters of each model; fortunately, there are information criteria (e.g., leave-one-out cross-validation) that can be used for this task. And even though convergence is proven for infinite iterations, that is not the case for finite MCMC runs, which can be arbitrarily bad; convergence assessment must take into account both intra- and inter-chain correlations.
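The classic convergence diagnostic of this kind is the potential scale reduction factor (R-hat), which compares between-chain and within-chain variance. A bare-bones sketch (ArviZ's actual implementation uses the more robust rank-normalized split-R-hat):

```python
import statistics

def r_hat(chains):
    # chains: m lists of n draws each of one scalar parameter.
    # W: mean within-chain variance; B_over_n: variance of chain means.
    # If the chains have mixed, the pooled variance estimate var_hat
    # matches W and r_hat is close to 1; stuck chains inflate it.
    n = len(chains[0])
    means = [statistics.mean(c) for c in chains]
    W = statistics.mean(statistics.variance(c) for c in chains)
    B_over_n = statistics.variance(means)
    var_hat = (n - 1) / n * W + B_over_n
    return (var_hat / W) ** 0.5
```

Chains sampling the same distribution give a value near (or slightly below) 1, while chains stuck in different regions give a value far above 1.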
ArviZ implements many of these algorithms for diagnostics and comparison, at least at a preliminary level, but it still lacks plots and tools to ease and improve their interpretation. This project seeks to design and implement these tools. Moreover, it will pay special attention to testing and documentation, with examples not only of the new functionality but also of what is already implemented.
Because of the realities of community networks, it is often difficult to find problems (known or not) in the network's machines, and even more complicated to keep a record of them.
One way to know what is happening on these machines (and to discover possible problems) is to look at their logs. To do this for the entire network, it is first necessary to reduce the volume, because the space all the logs would occupy (and the number of lines to analyze) grows proportionally with the number of routers in the network.
This GSoC project aims to develop a system that allows us to unify the logs of all the routers in the network, filtering and normalizing them.
Candis (a portmanteau of Cancer and Discover) is an open source data mining suite (released under the GNU General Public License v3) for DNA microarrays that consists of a wide collection of tools, from data extraction to model deployment. It has an RIA (rich internet application) and a CLI (command line interface) for carrying out research. My main focus will be on expanding Candis' machine learning services and tools to include deep learning, in particular by incorporating the Python-based library Keras, and taking the app as close to production as I can.
Sepsis is a potentially life-threatening condition caused by the body's response to an infection. The body normally releases chemicals into the bloodstream to fight an infection; sepsis occurs when the body's response to these chemicals is out of balance, triggering changes that can damage multiple organ systems. Our main goal is to train a deep learning model in Python, using all of the symptoms, for the prediction of early-onset sepsis. Depending upon the values fed into the application, a doctor should get a good idea of whether a person is susceptible to sepsis and get an early alert, which can be critical for diagnosis. The application should be able to make these predictions using only a minimal set of streaming physiological data in real time. During the course of this project, new deep learning methods using temporal convolutional neural networks or quasi-RNNs will be developed to identify markers that predict the onset of sepsis in patients admitted to the intensive care unit. We shall develop this application using the eICU database.
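The building block that makes temporal convolutional networks suitable for streaming physiological data is the causal (optionally dilated) convolution: the output at time t depends only on inputs at t, t−d, t−2d, and so on, so no future data leaks into a prediction. A minimal sketch (the real model would stack many such layers with learned weights):

```python
def causal_conv1d(x, kernel, dilation=1):
    # One causal, dilated 1-D convolution over a scalar time series.
    # kernel[0] multiplies the current sample, kernel[k] the sample
    # k * dilation steps in the past; indices before t=0 contribute 0.
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(kernel):
            idx = t - k * dilation
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out
```

Stacking such layers with exponentially growing dilations gives the long receptive field needed to spot early sepsis markers while preserving real-time, past-only predictions.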
Creating APIs should be as easy as writing a “Hello World” program. Therefore, the Google APIs team is working on different tools to accomplish that goal. One of those tools is gnostic which converts JSON or YAML OpenAPI descriptions to equivalent Protocol Buffer specifications. This project idea aims at creating a plugin for gnostic that generates a Protocol Buffer specification that can then be used to generate a gRPC API.
Detecting changes in the statistical properties of a time series is important in a large number of fields. A large amount of research has considered changes in the mean and variance of a time series; however, a typical assumption is that the error process is independent. Meanwhile, more and more users of existing packages, for example changepoint, face problems with real-world data due to the dependence structures present. Several methods available in the literature are not available as open source code, and this project plans to address that by adding this functionality to a popular CRAN package, changepoint.
In this GSoC project, I choose to employ a Transformer language model with an attention mechanism to automatically discover query templates for the neural question-answering knowledge-based model. My ultimate goal is to train the attention-based NSpM model on DBpedia and evaluate it against the QALD benchmark.
My proposal aims to create a new component that stores the representation of the robot's world over time, and allows querying it. For this, it is necessary to store the graph structure used to represent the robot's knowledge of the world. In RoboComp, when robots need to perform complex tasks, their behaviour is based on missions: a mission begins from an initial state and, through different transitions, reaches a final state, at which point the robot has completed the mission. The states are represented by a graph, and my proposal is to use a JSON structure to store this graph, which allows its storage as a collection of documents in MongoDB; each document could then represent a state of the robot. This slot is also intended to study the viability of Neo4j for this purpose. As far as I am concerned, the best option would be polyglot persistence, which consists of using the database best suited to the type of data (in our case MongoDB) together with the database best suited to the type of queries (in our case Neo4j). Specifically, I think Neo4j Doc Manager could be our best choice.
WebDriver is a remote control interface that enables introspection and control of user agents. Currently, Servo supports only a minimal subset of the WebDriver protocol. The goal of this project is to extend this support and pass the conformance tests to demonstrate the implementation's correctness. Additionally, the project also aims to support running WebDriver-based automated tests like the WebBluetooth test suite, which require complex browser control and automated interactions that WebDriver provides.
De-novo sequence assembly is the process of constructing a contiguous long sequence out of shorter sub-sequences produced by sequencing platforms, without referring to a reference genome. It is an essential task in many biological studies today, including population and medical studies. The initial stages of de-novo assembly require constructing a de Bruijn graph (DBG) from sequencing reads, compressing the de Bruijn graph into a unitig graph, and compressing multiple unitigs and nodes into contigs, supported by evidence from mapping paired-end reads. A coherent ecosystem of computational tools and packages allows researchers to quickly implement and test their ideas. For bioinformatics, Julia already offers such an ecosystem in the form of the BioJulia and EcoJulia projects, plus additional independent packages. This project will add sequence assembly tools to the BioJulia ecosystem, specifically: 1) DBG construction from reads, 2) unitig graph construction from a DBG, and 3) constructing contigs from unitigs. These tools will allow researchers to quickly construct and analyze the contigs obtained from a set of reads.
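As a rough illustration of step 1 (a plain-Python sketch, not the BioJulia API, and ignoring reverse complements), a DBG links the prefix and suffix (k-1)-mers of every k-mer in the reads:

```python
def de_bruijn_edges(reads, k):
    """Collect DBG edges: each k-mer links its prefix (k-1)-mer to its suffix."""
    edges = set()
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            edges.add((kmer[:-1], kmer[1:]))
    return edges

graph = de_bruijn_edges(["ACGTAC"], 3)
# k-mers ACG, CGT, GTA, TAC give edges AC->CG, CG->GT, GT->TA, TA->AC
```

Unitig construction then amounts to collapsing maximal non-branching paths in this graph.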
This project will focus on improving the existing UI of EvalAI for the benefit of both challenge organizers and participants. Beyond this, the project intends to improve the discoverability of all the features that EvalAI supports. The goals are to ease the pipeline for challenge creation, enhance the user experience of the platform, and add plots displaying the progress of state-of-the-art algorithms and of participant teams in a challenge over the years, along with several other features.
UI testing in LibreOffice is based on introspection code in C++ interacting with a testing framework in Python through a simple UNO interface. To identify objects, we use the IDs that were introduced for loading dialogs from UI files. During this project, the existing work should be extended and simplified. The project aims to implement a new domain-specific language (DSL) for UI testing by generating the Python code needed by the Python UI framework, which will make writing tests easier. It also aims to improve the logger so that all user interactions are logged in the new DSL syntax, making the logs more readable; we can then replay those interactions as a UI test. The project involves working with the UI elements and the UI testing framework of LibreOffice.
One of the robots supported by OpenRoberta is the Lego EV3, which it supports in two main ways, both requiring a custom firmware. The problem with this approach is that the process of running a program includes many steps and takes a lot of time. A simpler approach is needed, since a faster solution could increase the usage of OpenRoberta with the EV3, especially in competitions.
The goal of my project is to enable OpenRoberta to generate UF2 files, which will then be downloaded by the user and copied over to the EV3 running the stock firmware. The user will then be able to start the program as if it were a program created with the official Lego Mindstorms IDE.
This GSoC project essentially replaces the existing infrastructure of seven available exercises on drone programming with ROS, PX4 in Gazebo and mavROS. The resultant exercises shall allow users the possibility of directly porting their code onto real drones running PX4 with mavROS. These exercises shall also be integrated into the Jupyter framework for running them from the web release of the Robotics Academy framework.
DIRAC is highly scalable software for accessing distributed resources from various distributed systems. LHCb is DIRAC's initiator and main contributor. LHCb uses different types of computing technologies to distribute and process the collected physics data, and DIRAC is one of the software systems used because of its scalability and the level of orchestration and monitoring it provides for distributed resources, which is the main requirement of the LHCb collaboration. My task will be to further upgrade DIRAC's monitoring system while assuring high scalability: when LHCb is upgraded, there may be unpredictable types of data that DIRAC needs to handle flexibly, for which we will use Elasticsearch, one of the most widely used NoSQL technologies.
The Robinson–Schensted–Knuth (RSK) correspondence is a combinatorial bijection between a matrix and an ordered pair of Young tableaux of equal shape, and it plays an important role in representation theory. SageMath already implements the RSK correspondence along with some other insertion rules, namely Edelman–Greene insertion and Hecke insertion, but these are implemented as functions and conditional branches inside the main RSK algorithm. This is not extensible: the rules are currently defined in the RSK function itself, so introducing any new rule requires modifying that function. As combinatorialists are using RSK and its generalizations in their research, and continuing to develop even more variations, having these implemented in SageMath will increase SageMath's utility.
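For reference, classical Schensted row insertion, the rule that the various branches specialize, can be sketched as follows; this is an illustrative re-implementation, not SageMath's code:

```python
from bisect import bisect_right

def rsk(word):
    """Build the insertion tableau P and recording tableau Q of a word."""
    P, Q = [], []
    for step, x in enumerate(word, start=1):
        row = 0
        while True:
            if row == len(P):            # start a new row at the bottom
                P.append([x]); Q.append([step]); break
            r = P[row]
            i = bisect_right(r, x)       # first entry strictly greater than x
            if i == len(r):              # x fits at the end of this row
                r.append(x); Q[row].append(step); break
            r[i], x = x, r[i]            # bump that entry into the next row
            row += 1
    return P, Q

P, Q = rsk([3, 1, 2])
# P == [[1, 2], [3]], Q == [[1, 3], [2]]
```

An extensible design would let each insertion rule (Edelman–Greene, Hecke, ...) override only the bumping step, rather than branching inside one monolithic function.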
The goal of this project is the development of a Greek open source Morphological dictionary and integration of it into Greek spelling tools. Data will be extracted automatically from Greek Wiktionary.
CodeWorld is a web-based educational programming environment using Haskell. The goal of the proposed project is to add a source expression inspection feature to CodeWorld. This feature will allow users to observe the inputs and outputs of subexpressions of their running programs. The inspection UI may also allow users to manipulate function inputs directly.
To gain access to source subexpressions, I will implement a GHC source plugin that exposes subexpressions as additional top-level symbols in the original module. While initially tailored to CodeWorld’s needs, this plugin may also prove useful for inspecting and debugging other Haskell programs, possibly after any necessary extensions.
Gem-web is a tool that provides an interface for opening the documentation, source code, and website of a Ruby gem. Integrating it into the RubyGems CLI would make this feature available out of the box to all Ruby users, improving their productivity, since there would be no need to search for this information manually.
Firefox DevTools offers good tooling for monitoring HTTP traffic between the current page and the server. The existing Network panel allows intercepting and inspecting all data transferred over the wire including headers, posted data, responses, detailed timings, etc. Unfortunately, Firefox DevTools doesn’t offer a way to inspect WebSocket (WS) communication in Firefox Quantum.
This project aims at providing support for WebSocket monitoring and inspection in Firefox DevTools. The feature should be built on top of the existing Network panel user interface (UI) and be responsible for visualising data (i.e. WS frames) sent through a WebSocket connection. Users should be able to perform common tasks such as pausing/resuming monitoring, clearing frames, searching/filtering, looking at summary data etc. Light and Dark Themes should be supported.
As one of the stretch goals, support for popular protocols such as Socket.IO, SockJS, plain JSON, WAMP, MQTT is planned for implementation.
This project aims to develop an interactive interface where users can test and explore Rocket.Chat's REST APIs. It will ensure that the whole documentation stays consistent with the endpoints in Rocket.Chat's source code. Alongside easy-to-use documentation, it will give users the ability to connect, alter, and manipulate parameters to fully experiment with and understand how Rocket.Chat's APIs can be utilized at the various endpoints, quickly and regardless of the number of articles.
QGIS 3D is a great feature that was introduced in QGIS 3.0 in 2018. However, it is still missing some features that users need or would find helpful. In this project, I will work on 4 improvements to fill these gaps:
Software Heritage can be accessed through a beautiful and rich web UI, developed in Django. Since the web portal is the part of the system end users interact with most, it needs to be the best it can be. I plan to make the web UI even better and ensure there are minimal bugs and vulnerabilities.
The proposal includes a list of deliverables and a timeline of work. The deliverables are inspired by the wiki pages, with some inputs from my side, and include changes to the front end, back end, testing, and documentation.
By Kalpit Kothari (kalpitk)
Brief Explanation
Qt is an open-source cross-platform framework facilitating GUI application development for mobile, desktop, and embedded devices. Although the framework is written in C++, it brings with it a meta-language (or modelling language), QML. To accelerate UI development, QML provides the Qt Quick Controls module with ready-made widget types, each backed by a C++ class, like Button or Switch, ready to be styled and modified to a project's needs. The module is currently on version 2.4, but there is no support for a Calendar in the latest version; to be more specific, the Calendar was last provided in version 1.4 of the Qt Quick Controls module, released with Qt 5.3.
Expected Results
The Qt Calendar widget is updated, modified accordingly, and ported to Qt 5.12 and the current version of Qt Quick Controls 2, following the QtQC2 module standards and supporting all features, such as styling. Ideally, the Calendar would have a Template type from which properties like background and/or contentItem can be set for style customisation and switching support. It should also be possible to instantiate it as a standalone QML type and style it locally for regular usage.
Enhance the UI of Open Event Frontend and make its public pages more like Open Event Wsgen. Implement payment gateways such as Google Wallet, Sofort Pay, and Alipay. Implement an accounts page where users can see their payment history. Make the form builder for speaker and attendee data similar to Google Forms.
Implement an up-to-date SDK and emulator installer, and improve its functionality
Resolve all known issues in the Android SDK Updater.
Identify, test, and find solutions to existing issues with the Android Emulator across platforms.
Restructure the Android Mode core libraries and work on converting them to Kotlin.
To create a graphical library, possibly using IUP, extensible via Lua, to allow easy creation of software like schematic editors, flowchart creators, mind maps, and block diagrams in Lua. The library needs to provide the basic mechanisms and graphic checks for creating custom blocks and interconnections, and an API to build complex interactions on top of these, such as hierarchical schematic editors.
High Level Trigger 1 (HLT1) is the first and critical stage in the software reconstruction of collisions at the LHCb experiment at the Large Hadron Collider at CERN. Allen aims to do full software reconstruction on GPUs.
However, the reconstruction must also be able to run on the LHCb baseline x86 architecture. Since Allen's algorithms are designed to be efficient on SIMD architectures, a natural translation to support x86 is possible. The SPMD programming model bears a resemblance to CUDA's SIMT programming model and is a natural target for code translation. Such an automated conversion would be beneficial not only for Allen, but for any CUDA project seeking cross-architecture support.
Exploiting vectorization on modern CPUs is a hard task. Manual vectorization requires a lot of developer time, and maintenance can be hard. ISPC raises the abstraction level of SIMD programming via the SPMD model. Since the SIMT programming model on GPUs closely resembles SIMD, the code conversion can be automated using an intermediate translation engine. The project aims to convert CUDA to ISPC using a translation engine based on LLVM.
Currently, the stats module lacks some useful features suggested on the ideas page, such as random walks and random matrices. There is also scope for completing the GSoC 2018 work of Akash Vaish, such as exporting expressions of random variables to external libraries. In addition, various functions and methods have issues with the results they produce: for example, unwanted complex results appear for simple probability expressions, or unevaluated results are returned. I would divide my project into the following broad categories:
Employing machine learning techniques to incorporate an ORB descriptor (with auto-stitching and live video support) to enable better pattern matching, along with some major LDI UI (and API) enhancements.
This project aims to increase usability and ease of use for open-source enthusiasts by enhancing and improving the coala Community website and the coala Projects website for newcomers, developers, and other communities. A community website plays an essential role for an open-source community: it spreads the word about what the community offers to other organizations and showcases the active community, its skills, and its precious contributions. So, the major focus of this proposal is to enhance the architecture of these websites. In addition, new API endpoints will be created in the coala webservices to manage the user database and the coala database in a more structured form.
This project deals with extending DRAKVUF to fuzz the operating system using the hypervisor and DRAKVUF's libinjector. libinjector will be used to make random calls to the guest OS running inside the VM, and the resulting behaviour will be reported back to the fuzzer using LibVMI.
Develop a system able to trace and profile the performance of the Falco engine. First, it is necessary to monitor and document the existing performance constraints of Falco; then, using this information, the goal is to improve performance by reducing the impact of the discovered bottlenecks, optimizing the Falco engine. Finally, provide an analysis of the performance improvements and compare the result obtained to the initial one.
To convert the Androphsy (OpenMF) backend to Python.
This project aims at providing network-wide ad-blocking and user-bandwidth regulation capabilities in an Amahi HDA. This would prevent a client from being served any advertisements, whether browser-based or in-app, and would eradicate the need to install ad-blockers separately on every device within an HDA local network. The user-bandwidth regulation feature would enable the owner of the network to control the bandwidth of each individual device connected to their network using a friendly web-based interface. This could be helpful for limiting bandwidth usage by guests or for parental control.
Improve support for NetBSD kernel fuzzing in Syzkaller kernel fuzzer and add support for fuzzing other kernel subsystems.
This proposal will develop object-class methods to transform on-disk FlatBuffers data (row-oriented) into the Apache Arrow format (column-oriented) in memory, then work toward extending our row-oriented filters and aggregations to column-oriented versions using the Arrow APIs.
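The core row-to-column pivot can be sketched like this (a plain-Python illustration; the real implementation would use the FlatBuffers and Arrow APIs and their builders):

```python
def rows_to_columns(rows):
    """Pivot row-oriented records into column-oriented arrays."""
    columns = {name: [] for name in rows[0]}
    for record in rows:
        for name, value in record.items():
            columns[name].append(value)
    return columns

table = rows_to_columns([
    {"id": 1, "score": 0.5},
    {"id": 2, "score": 0.9},
])
# {"id": [1, 2], "score": [0.5, 0.9]}
```

Once data is columnar, a filter or aggregation touches only the arrays for the columns it needs, which is the performance motivation for the Arrow layout.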
Digital Negative (DNG) image format decoding support for the FFmpeg project. DNG is an open lossless/raw image format meant to standardize and replace proprietary, custom camera-specific RAW formats.
GNU social is communication software used in federated social networks. This project will deliver:
• Improvements to the federated requests queue (shared between OStatus and ActivityPub)
• Improvements to the OEmbed plugin
• Improvements to the image systems
• Temporary posts
• Removal of posts with no engagement
• Implementation of a circuit breaker
A standalone HTML+JavaScript library through which CSV data can be uploaded and charts plotted from its columns. Other features, such as exporting the CSV as a Google Sheet, publishing a chart as a research note, and saving a chart as an image, will be built into the library.
Upgrade Field operations app from version 5 to version 6
There are numerous MPI implementations in the Gentoo portage tree and far more in general, but most of them can't be installed together as is. The only way to do this now is by using empi from the science overlay, and empi has shortcomings such as its lack of multilib support; it was also never ported to the main tree. In my proposed plan, a new MPI framework ported from existing MPI applications is introduced to enable Gentoo to use multiple MPI versions and the packages that depend on them (for example, HPL). The new MPI framework is mainly based on the ‘Modules’ package, together with the Gentoo Prefix technique, for detecting, choosing, and manipulating the MPI environment. A new implementation of mpi.eclass is also introduced, and packages can still be emerged normally onto the system without any trace of mpi.eclass being used. The approach also enables users to easily work with different implementations at the same time without any special privileges. To address empi's lack of multilib support, the new framework also allows the environment to be configured and installed into different places for different architectures (amd64, x86, etc.).
This is a GUI client for the Fedora QA team for rating updates.
The Magnetic Lasso was lost during the port from Qt3 to Qt4; this project continues the work of porting the tool to the current version of Krita.
As Julia is one of the main languages for scientific computing, its users expect it to be fast. One often needs to perform simple arithmetic computations, such as adding or subtracting two numbers. In many cases, these computations rely on the standard implementations of the arithmetic operators. These implementations might not always be optimal: summing the n elements of a vector of affine expressions would have O(n^2) complexity if one proceeds in this naive manner, since temporary expressions need to be allocated at each step. However, using mutable expressions, this complexity can be brought down to O(n). The proposed project will involve creating a new package, MutableArithmetics.jl, which will allow arithmetic operations to mutate their arguments in place, avoiding these intermediate allocations. On top of this, benchmarks will be written to demonstrate the improvement in runtime.
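The allocation problem can be illustrated in a language-agnostic way (a Python sketch, not the MutableArithmetics.jl API), modeling affine expressions as coefficient dictionaries:

```python
def add(a, b):
    """Immutable-style +: allocates a fresh expression at every step."""
    out = dict(a)
    for var, coef in b.items():
        out[var] = out.get(var, 0.0) + coef
    return out

def add_mut(a, b):
    """Mutable-style +: updates `a` in place, no intermediate allocation."""
    for var, coef in b.items():
        a[var] = a.get(var, 0.0) + coef
    return a

exprs = [{f"x{i}": 1.0} for i in range(1000)]

# Naive summation, `total = add(total, e)`, copies the growing expression
# at each step: O(n^2) total work. The mutable version is O(n) overall:
total = {}
for e in exprs:
    add_mut(total, e)
```

The Julia package would dispatch to such in-place operations automatically wherever mutation is safe.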
This proposal describes the epipolar geometry tools that I would like to implement: the KAZE feature detector and descriptor (or another one, if KAZE is unavailable for open source), fundamental matrix estimation, drawing epipolar lines, and image rectification. These will lay the foundations for 3D scene reconstruction, motion tracking (tracking an object in a video), and other lightweight computer vision tools.
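To illustrate the epipolar constraint these tools revolve around, consider a rectified stereo pair (pure horizontal translation), whose fundamental matrix is known in closed form; any correct match must satisfy x2^T F x1 = 0 (a plain-Python sketch with hypothetical point values):

```python
# Fundamental matrix of a rectified pair: epipolar lines are horizontal.
F = [[0, 0,  0],
     [0, 0, -1],
     [0, 1,  0]]

def epipolar_line(F, x):
    """l = F x, a homogeneous line (a, b, c) meaning a*u + b*v + c = 0."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

x1 = [4.0, 2.0, 1.0]            # left-image point (u, v, 1)
line = epipolar_line(F, x1)     # [0, -1, 2], i.e. the horizontal line v = 2
x2 = [7.5, 2.0, 1.0]            # a candidate match on the same scanline
residual = sum(a * b for a, b in zip(x2, line))   # 0 for a valid match
```

Rectification warps a general pair into exactly this configuration, which is why it is listed alongside fundamental matrix estimation and epipolar line drawing.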
I am Oleksandr Samoilov, a student of Dnipro National University. I have experience developing various MVC applications with Zend Framework and RESTful applications with Symfony, as well as experience working on high-load projects.
I find the idea of “Webservices in Joomla” interesting. I think I can participate in the development of Joomla 4; it will be a very important experience for me, and I want to benefit the Joomla community.
I have installed Joomla 4 and phpcs, and configured Xdebug.
I have already completed the task (https://issues.joomla.org/tracker/joomla-cms/19123). Here is my pull request: https://github.com/joomla/joomla-cms/pull/24266
Currently, the submission worker that evaluates challenges requires manual scaling. To enable auto-scaling, I'll be migrating it from EC2 to AWS Fargate. The goal of this project is to write a robust test suite for the submission worker and port it to AWS Fargate to set up auto-scaling and logging. The tasks also include giving challenge hosts control over the submission worker from the UI, in terms of starting, stopping, and restarting it, and automating the container jobs based on certain conditions. Among other aspiring deliverables is a live logging feature for the submission worker.
hydrus (Hydra universal server) is a Python-based tool to create hypermedia-driven REST APIs. It uses the Hydra (W3C draft) standard for the creation and documentation of its APIs. This proposal aims to improve hydrus and extend its functionality. As we want to make hydrus reliable and complete, we need to provide at least all the basic functionality a Hydra-compliant API documentation can describe. As we are planning to create a testbed in hydra-python-agent to test whether a server is Hydra compatible, it makes sense to first make hydrus itself compatible with Hydra so we can test it. So besides extending the functionality of hydrus, this project includes refactoring hydrus and making it completely compatible with the Hydra specification.
The pymc project is an open-source collaboration focused on Bayesian modeling and probabilistic machine learning. Most of the current API makes use of Markov chain Monte Carlo. The project has been used successfully in research ranging from psychology to climate science. Currently, the pymc group is transitioning to a TensorFlow back end from the Theano back end used in pymc3. On top of the back-end change, there is also an effort to include symbolic computation as a means to bring more functionality to the table. This creates new hurdles to overcome, such as converting pymc4 models to their corresponding symbolic-pymc meta objects and adding more functionality to the symbolic-pymc package. This project is aimed at accomplishing the following tasks: (1) set up conversion of pymc4 models to symbolic-pymc meta objects; (2) improve the symbolic-pymc codebase; (3) set up Gibbs sampling.
This project aims to develop a Jupyter notebook plugin which deploys Spark required services to a kubernetes cluster on OpenStack cloud at CERN.
Kubernetes provides scaling when traffic or computation increases: a Spark driver pod launched in the cluster creates multiple Spark executor pods, which execute the application code.
The services that will be attached to the Kubernetes cluster are CERN CVMFS, Spark shuffle service, and Spark history server. These services are needed for running Spark on Kubernetes. Physicists can then use Spark running in the background to perform scalable interactive data analysis and visualization.
Also, a proper UI will be provided inside the Jupyter notebook so that a user can attach various services to the cluster. This plugin will then be integrated with the SWAN notebook service provided by CERN.
SVG Translation was one of the top wishes in the 2017 Community Wishlist. This tool makes it easier to translate SVG files for users who have no experience in doing so. The tool could also be integrated within the interface of the Content Translation tool, so that users find it even easier to translate a file's labels. This project will focus on this integration, working hand in hand with the developers of the Content Translator, the SVG Translation tool, and the community.
Molecular simulations are predominantly run under periodic boundary conditions, i.e., upon leaving one face of the simulation volume, a particle re-enters through the opposite face. This can lead to molecules being split over the periodic boundary, which requires rectification before performing calculations. This project will define wrapping and unwrapping functionality in the various AtomGroup methods that are based on particle positions, e.g., center of mass. Due to performance considerations, the functionality of these methods will also require translation into Cython.
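A minimal sketch of the two operations (plain Python, orthorhombic boxes only; not the MDAnalysis API, which must also handle triclinic cells):

```python
def wrap(pos, box):
    """Map a position back into the primary box image."""
    return [p % b for p, b in zip(pos, box)]

def unwrap(positions, box):
    """Make a molecule contiguous: place each atom in the periodic image
    closest to the previous atom (minimum-image convention)."""
    out = [list(positions[0])]
    for pos in positions[1:]:
        prev, atom = out[-1], []
        for p, q, b in zip(pos, prev, box):
            d = p - q
            d -= b * round(d / b)      # minimum-image displacement
            atom.append(q + d)
        out.append(atom)
    return out

box = [10.0, 10.0, 10.0]
split = [[9.5, 0.0, 0.0], [0.5, 0.0, 0.0]]   # bonded atoms split by the boundary
whole = unwrap(split, box)                    # second atom moves to x = 10.5
```

Quantities like the center of mass must be computed on the unwrapped coordinates, then optionally wrapped back into the box.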
Bazel on Linux already supports various ways to run actions in a sandbox (linux-sandbox, processwrapper-sandbox). We want to support sandboxing on Windows by reusing the DetoursService from BuildXL (Microsoft Build Accelerator).
Related issue: https://github.com/bazelbuild/bazel/issues/5136
OpenIoE is an open-source middleware platform for building, managing, and integrating connected products with the Internet of Everything. It enables you to subscribe to data streams, get data from sensors, and store it. The application was generated using the JHipster application generator.
The proposed project is the implementation of link previews within the Polari IRC client.
People often share links during a conversation, and quite often the link simply refers to an image. Currently, in these cases, switching to the web browser is the only way to view the content.
Considering the increasing number of people using Polari, this important feature is definitely going to benefit both the application's growth and the current community.
This project proposes to integrate a cell tracking feature as an extension to the Active Segmentation application developed in ImageJ by the INCF Belgian Node. Cell tracking is done by optical flow tracking using customized tracking points.
A new type of brush for Krita, that will use an SVG file as a source for a Pipe/Animated Brush
Performance testing analyzes results to determine where the app's performance can be improved, which also improves the user experience.
The project aims at making the logged-out pages more attractive by improving the visual design, adding illustrations, and making the design consistent across all pages. Furthermore, CSS will be refactored to SCSS nesting. In the end, a presentation video of the Zulip system will be produced, which is a nice-to-have for the website.
The project will include adding new approaches to already existing triangulated mesh simplification framework of CGAL.
The current approach depends on the method developed by Lindstrom & Turk, which is based on edge collapsing. The first suggested new approach is based on the algorithm developed by Garland & Heckbert, which uses vertex-pair collapsing instead of edge collapsing as the primitive operation. In their method, vertices that share an edge or that are close to each other may be collapsed. In each iteration, a pair of vertices is collapsed into a new vertex; the pair chosen is the one that minimizes an error metric expressed as a quadratic function. A parallel version of Garland & Heckbert will also be implemented.
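The quadric error metric at the heart of Garland & Heckbert can be sketched as follows (an illustrative plain-Python version, not CGAL code): each face plane contributes a 4x4 quadric, quadrics add, and the cost of placing the merged vertex at homogeneous v = (x, y, z, 1) is v^T Q v:

```python
def plane_quadric(p):
    """Q = p p^T for a plane p = (a, b, c, d) with a*x + b*y + c*z + d = 0."""
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(qa, qb):
    return [[qa[i][j] + qb[i][j] for j in range(4)] for i in range(4)]

def cost(Q, v):
    """v^T Q v: for unit normals, the summed squared distance to the planes."""
    return sum(v[i] * Q[i][j] * v[j] for i in range(4) for j in range(4))

# A vertex whose incident faces lie in the planes z = 0 and y = 0:
Q = add_quadrics(plane_quadric([0, 0, 1, 0]), plane_quadric([0, 1, 0, 0]))
# Placements on the shared line y = z = 0 cost nothing; others pay the error.
```

Because quadrics simply add, the quadric of a collapsed pair is the sum of the two vertex quadrics, which is what makes the greedy iteration cheap.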
After implementing this approach in CGAL, CGAL's triangulated mesh simplification framework will be compared to other libraries (OpenMesh, MeshLab, PCL) in terms of runtime and memory usage.
This cycle will be repeated for other mesh simplification methods.
A framework to implement real time software on Adapteva Parallella platform
Sample Platform is CCExtractor's platform that manages a test-suite bot, sample uploads, regression-test runs, and much more. The purpose of the platform is to provide all the functionality related to managing the CCExtractor repository, including but not limited to email notifications for new issues and pull requests, taking in samples from users (over FTP), and testing each pull request.
It should be easy to install, set up, and modify in case we ever need to. It is crucial to have version control for the database in place, and hence we need to finalize a database migration manager to be ready for future schema changes.
Another (and the most important) factor is the tests; at the moment very little customisation is possible. The tests need improved comparison, better methods to provide output, improved concurrent and parallel testing, and more. A lot of other changes need to be implemented, which are described in detail in the proposal.
Apache Commons is an Apache project focused on all aspects of reusable Java components. One of its components is Commons Math, a library of lightweight, self-contained mathematics and statistics components addressing the most common problems not covered by the Java programming language or Commons Lang. The old home of these statistical functions, Commons Math, grew too large, hierarchically complex, and interdependent for the Commons mission, and it also had unsolved design issues. This led to the need to create a separate statistics component. The aim of the project is to develop this new component, Commons Statistics, which will provide all the statistical functions defined in Commons Math's org.apache.commons.math4.stat package, implemented using Java's latest features such as streams, mapping, functional interfaces, and lambda functions, and developed in a functional programming style.
The current process for setting up a new integration can be quite cumbersome, and for some integrations (such as those requiring the user to install and set up Hubot first, like the YouTube integration) the process may even be described as daunting. This means that even though Zulip has a plethora of beautiful integrations, users won't really see their full potential because of the initial “work barrier” required to set them up. So why not make this process much more enjoyable for the user and less of a pain? Further, the bots-to-resources permissions system can and should be improved (based on granting permissions to clusters of resources).
Main Features:
1. URL Creator UI
2. Dashboard-ize the /integrations Page (One-Click Integrations)
3. Overhaul the Current Bots Permissions System
R is slow compared to other popular languages. “The R interpreter is not fast and execution of large amounts of R code can be unacceptably slow”. This is because “R was purposely designed to make data analysis and statistics easier for you to do. It was not designed to make life easier for your computer”. Although there are several R interpreters that attempt to improve execution speed, “switching interpreters is something to consider carefully”.
“Beyond performance limitations due to design and implementation, it has to be said that a lot of R code is slow simply because it’s poorly written. Few R users have any formal training in programming or software development. This means that it’s relatively easy to make most R code much faster”. “A good deal of work is going into making R more efficient. Much of this work consists of reimplementing interpreted R code”.
The main goal of this project is to provide an R package with functions that allow users to automatically apply strategies to optimize their R code. The developed functions will take R code as input and produce R code as output, so that the user can understand which modifications to the code cause its optimization.
memory_order_consume is automatically promoted to memory_order_acquire because of the difficulties compilers face in tracing dependencies at the C/C++ source-code level, and memory_order_acquire introduces unnecessary memory-fence instructions that might not be desired. This proposal is about implementing memory_order_consume by marking only the pointers that carry dependencies, not other objects.
This project aims to add JSONField and ArrayField that can be used for all database backends supported by Django.
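On backends without a native JSON type, such a field essentially boils down to a text round-trip; a minimal sketch of that idea (standalone illustrative functions, not Django's actual field implementation):

```python
import json

def get_prep_value(value):
    """Serialize a Python object to text before writing to the database."""
    return json.dumps(value, sort_keys=True)

def from_db_value(text):
    """Decode the stored text back into Python objects on load."""
    return json.loads(text)

stored = get_prep_value({"tags": ["a", "b"], "count": 1})
restored = from_db_value(stored)
```

The real work of the project lies in dispatching between this fallback and the native JSON support (and operators) of each backend Django supports.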
Volumetric viewing of scientific phenomena is an important feature in modern scientific visualization tools because it can express phenomena, such as electron clouds and van der Waals interactions, that can't be visualized using the default vertex-based geometric renderers/rasterizers. Volume rendering was proposed in the Molecular Graphics journal as early as 1989, and the technique has gained enough maturity that now is the proper time to integrate it into the 3Dmol.js rendering system, for which volume rendering is an essential milestone. During my work I will investigate different approaches to visualizing volumes and select the techniques that are modern, efficient, and fit the existing code best.
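For example, the front-to-back compositing loop at the core of volume ray casting looks roughly like this (an illustrative Python sketch, not 3Dmol.js code, which would run per fragment on the GPU):

```python
def composite(opacities):
    """Accumulate opacity front to back along one ray through the volume."""
    accumulated = 0.0
    for a in opacities:
        accumulated += (1.0 - accumulated) * a   # attenuate by remaining transparency
        if accumulated > 0.99:                   # early ray termination
            break
    return accumulated

alpha = composite([0.5, 0.5])   # two half-opaque samples compose to 0.75
```

The same loop also accumulates color per sample; early ray termination is one of the standard optimizations that makes interactive rendering feasible.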
A redesigned application developed in Flutter containing various tools for controlling the LG screen: touch-gesture-based navigation; tours, POIs, and guide modules to assist the user with common usages; and enhanced connection settings with automatic troubleshooting procedures. This application will be a perfect tool for a novice user to control the LG screen. The ability to draw elements like paths and polygons directly over an imitation of the LG screen in the controller app, plus a direct-sharing add-on, will provide additional ease of use. It will also provide an easy interface for using the LG screen for educational and commercial purposes.
A tool to manage Prosody's community modules. It allows users to install, remove, update and list plugins. A Mercurial repository is developed to support these operations.
Carbon Footprint is a project which provides users with information about the emissions of carbon, ethane and nitrous oxide produced when they perform activities such as using various appliances, consuming electricity, or travelling by plane, train or other vehicles. Carbon Footprint Alexa is an Amazon Alexa skill which provides an interactive, conversational interface: the user can ask it any query related to carbon emissions, and it can be operated from Android devices, iPhones or any Amazon Echo device. Carbon Footprint is a great step towards protecting our environment, because only a person who is aware of how their activities affect the environment can take the steps necessary to reduce that harm.
Currently, a full Xcode installation is needed to install MacPorts. In reality this is overkill, as most of the packages in MacPorts do not need the full installation. To phase out this dependency, several things should be done:
This will lay the groundwork to eventually phase out the Xcode dependency for MacPorts as a whole. All contributions are made to the macports-base repository, touching various areas of the codebase and overall adding features to the Tcl-based command line tool.
NMatrix is being re-implemented by SciRuby contributors at https://github.com/prasunanand/nmatrix_reloaded. The re-implementation has a fast core written with the C API and a Ruby front-end. With this project, we aim to create a faster NMatrix that will eventually replace the original, with source that is simpler and easier to read and improve. This proposal adds more features to the newer NMatrix, such as support for LAPACK and BLAS routines, sparse matrix operations, indexing and broadcasting, and matrix decomposition.
The goal is to adapt the TriforceAFL kernel syscall fuzzer to effectively catch and report issues in the NetBSD kernel on amd64. TriforceAFL is a modified version of AFL that supports fuzzing using QEMU's full-system emulation. TriforceAFL-for-NetBSD will be a syscall fuzzer built on top of TriforceAFL. The project will fuzz the NetBSD kernel, report the bugs found, and create a pkgsrc package for TriforceAFL-for-NetBSD.
Infrastructure management is an important part of every project, as we are always faced with the need to make regular releases of new features while applying consistent testing methods. The aim of this project is to expand the code coverage of the unit and integration tests for the Submitty project. It will not just introduce new tests to increase code coverage, but will also focus on the best ways to implement, automate and execute the test suites. In short, this GSoC project is about the Continuous Integration (CI) of Submitty, mainly in PHP and Python, where the majority of the tests are written, and is concerned chiefly with writing tests and automating the building and deployment of Submitty.
Implement a WebSocket layer for hs-web3, use this WebSocket functionality to upgrade the current web3 modules, and implement an IPFS-API module using existing IPFS API services.
The aim of this project is to create an online interactive guide for digital logic design. The primary goal is to develop an open-source book with quality content which teaches digital logic design. It will enable students to learn digital design by interacting with circuits, truth tables and other interactive elements as they proceed through the book. Professors and students all over the world will be able to read and contribute to it.
Move the OpenMRS Atlas code to use LDAP, get the OpenMRS module running again, and support upload/download of marker images!
This project involves applying Clang's thread safety analysis to the Linux kernel source code to prevent concurrency-related bugs and report potential race conditions where the source code is suitably marked with the available Clang annotations. The analysis is completely static (i.e., compile-time), and the interprocedural analysis involves no run-time overhead.
Stabilizing the Processing video library by repackaging GStreamer 1.x for a leaner video release and addressing some additional clean-up to reach a finalized release of the 2.0 beta.
Documentation, essential for any software project, needs to be as easy as possible to write and publish. For Julia packages, you can use the Documenter package, which automatically generates and publishes package manuals as a web page or a PDF.
It is important for Documenter to be modern and flexible. This project aims to revitalize Documenter, by upgrading the generated HTML front end and making it easier for documentation authors to customize. This will make sure that Documenter will keep meeting the needs of the community for years to come.
Draft proposal for Attachments Module Enhancements
As a student, I use graphs in almost everything related to my master’s research. Graph theory is a great mathematical tool for programmers and mathematicians, as it facilitates the modelling and resolution of some computational problems. However, it is not always easy to understand the underlying theory and algorithms.
Rocs (https://kde.org/applications/education/rocs) is a Graph Theory IDE created in January 2010 and still updated almost every month, currently in the KDE repositories, designed for lecturers, students and researchers. It is composed of a visual data structure editor and a powerful JavaScript scripting engine. Though useful for creating simple graphs, it still lacks tools to simplify the creation of bigger graphs, as well as basic graph theory algorithms for beginners. Another improvement it needs is a step-by-step execution system, to help users fully comprehend the algorithms and to facilitate debugging.
I believe improving the JS polyfills is an important project because it can offer a major part of the user base features they need and, perhaps, influence standardization in the current SVG working draft. I have already started documenting the polyfills, as shown in the proposal task list. If we find a way of polyfilling Inkscape-specific functionality, I think it is a starting point for developing mesh gradients, animations and other functionality 'imported' from other graphics software. Both are mentored by Tavmjong Bah.
This project intends to add PRU support to RTEMS, using the BeagleBone Black (BBB). The BBB has a Texas Instruments AM3358 SoC with a Programmable Real-Time Unit (PRU). The PRU is able to access the SoC's I/O within one cycle. This will enable the RTEMS community to develop heavily I/O-dependent tasks on Texas Instruments SoCs with PRUs.
P5 Math In Motion would be a library that renders interactive math notation inside p5.js projects with the help of KaTeX, an open-source library for rendering math notation on the web.
The PPI (Poverty Probability Index) is a powerful tool to measure a poverty index and also a yardstick to measure the progress and impact of poverty outreach programs. To reduce the amount of human intervention in surveying, for more accurate and quicker results, I have submitted a proposal to build an Android application which will use machine learning and computer vision, via the Google Cloud Vision API and the TensorFlow library, to perform image analysis and aid the surveying process. The app will render the relevant PPI using the Apache Fineract SPM API. Further, images may be uploaded to the Cloud Vision models, or fed to the embedded TensorFlow models made available through Firebase ML Kit, to obtain a report of detected objects and inferences relevant to the PPI questionnaire.
MuseScore's functionality can be extended by plugins, but discovering compatible plugins and installing them is currently a manual job. You have to use a web browser to choose and download the correct plugin version from a plugin page, extract it with your file archiver, copy the qml files (and sometimes translation files) to the MuseScore plugin directory, and finally reload the plugins in MuseScore.
The whole process described above is possible to be fully automated, which is exactly the goal of this project.
The project consists of 2 parts:
The project hopes to refactor and add Cycles/EEVEE nodes to improve the workflow of technical/shading artists by providing more streamlined nodes for vector math and numerical computations, more noise types, and wider access to Blender’s data such as object properties and spline info.
The projects hopes to achieve the following:
The aim of the project is to allow users to upload data in the form of files (.csv, data tables, Google Sheets, etc.) to BETYdb. This will be done by creating an interface providing a logical workflow that guides the user through the process of uploading to BETYdb. Crawlers are used to increase the knowledge base.
smbcmp is a simple CLI tool that uses Wireshark CLI version (tshark) to dump and diff traces. It currently uses the plain text output format of tshark but tshark also has a proper XML output. The goal of this project would be to use or combine current tshark output with the XML output to do better and deeper diffs (ignoring indentation differences, adding ways to let users add ignore rules, etc).
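A minimal sketch of the XML-based diffing idea: parse the structured output, normalize it into name=value lines so indentation differences vanish, and honour user-supplied ignore rules. The PDML-like field names and layout here are illustrative assumptions, not smbcmp's actual data model.

```python
import difflib
import xml.etree.ElementTree as ET

def normalized_fields(pdml_text):
    """Flatten a PDML-style XML trace into 'name=value' lines,
    discarding indentation/layout so diffs compare content only."""
    root = ET.fromstring(pdml_text)
    return [f'{field.get("name", "")}={field.get("show", "")}'
            for field in root.iter("field")]

def diff_traces(xml_a, xml_b, ignore=()):
    """Unified diff of two traces; field names in `ignore` are skipped
    (a user-supplied ignore rule)."""
    a = [l for l in normalized_fields(xml_a) if l.split("=")[0] not in ignore]
    b = [l for l in normalized_fields(xml_b) if l.split("=")[0] not in ignore]
    return list(difflib.unified_diff(a, b, lineterm=""))

trace1 = '<packet><field name="smb2.cmd" show="5"/><field name="smb2.sesid" show="0x1"/></packet>'
trace2 = '<packet><field name="smb2.cmd" show="5"/><field name="smb2.sesid" show="0x2"/></packet>'

# Session ids differ, but an ignore rule can hide that difference.
assert diff_traces(trace1, trace2, ignore=("smb2.sesid",)) == []
```

Ignore rules expressed at the field-name level, as here, are what make the XML output strictly more useful than diffing tshark's plain-text rendering.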
The SIMPLE Grid project is an extension of the SIMPLE Framework that combines popular configuration management technologies such as Puppet/Ansible and container orchestration technologies such as Docker Swarm/Kubernetes to allow deployment of complex computing clusters using a single site level configuration file. The proposed project aims to improve functionality, correctness, and efficiency of different stages of SIMPLE Grid Framework.
Spidermon is a monitoring tool created to help Scrapy users. It works by defining monitors that are checked when Scrapy spiders run, like tests, and by performing actions to notify users of any problems during a run. Today a user can be notified via email or Slack, and anyone can create additional actions for other integrations. Spidermon currently relies on manual configuration to set up projects, which is fine for experienced users but unhelpful for those who want a quick setup; Scrapy itself already has such a feature, together with good documentation. My idea for this project is to create a robust CLI for Spidermon setup: a user would be able to enable and configure Spidermon monitors, actions and item validations, and be presented with a detailed help menu upon request.
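The setup CLI could be sketched with argparse; the subcommand and monitor names below are hypothetical, not Spidermon's real interface:

```python
import argparse

def build_parser():
    """CLI skeleton: one subcommand per setup task (names illustrative)."""
    parser = argparse.ArgumentParser(prog="spidermon",
                                     description="Spidermon setup helper")
    sub = parser.add_subparsers(dest="command", required=True)
    enable = sub.add_parser("enable", help="enable a built-in monitor")
    enable.add_argument("monitor")
    validate = sub.add_parser("validate", help="attach an item validation schema")
    validate.add_argument("schema_file")
    return parser

def main(argv):
    args = build_parser().parse_args(argv)
    if args.command == "enable":
        return f"enabled monitor: {args.monitor}"
    if args.command == "validate":
        return f"validation schema registered: {args.schema_file}"

assert main(["enable", "ItemCountMonitor"]) == "enabled monitor: ItemCountMonitor"
```

argparse generates the detailed help menu (`spidermon --help`, `spidermon enable --help`) for free, which matches the "help upon request" goal.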
Numba is a JIT (just-in-time) compiler that compiles a given section of the code (specified by the user) at run time instead of compiling the entire code at once. The aim of the project is to identify possible bottlenecks in the code in order to speed up overall execution by applying Numba project-wide. Another part of this project is to vectorize functions (specifically for stats and diagnostics) using xarray ufuncs to broadcast across xarrays.
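Before applying Numba project-wide, the hot spots first have to be found. A small profiling sketch using only the standard library (the function names are invented for illustration) shows one way to identify candidates for `@numba.njit`:

```python
import cProfile
import io
import pstats

def simulate(n):
    # Hot numeric loop: a typical candidate for numba.njit once identified.
    total = 0.0
    for i in range(n):
        total += (i % 7) * 0.5
    return total

def hottest_functions(callable_, top=3):
    """Profile one call and return a report of the `top` functions
    by cumulative time."""
    profiler = cProfile.Profile()
    profiler.enable()
    callable_()
    profiler.disable()
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(top)
    return stream.getvalue()

report = hottest_functions(lambda: simulate(100_000))
assert "simulate" in report  # the hot loop shows up in the report
```

Decorating only the functions that dominate such a report keeps compilation overhead low while capturing most of the speedup.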
This project is about migrating skeleton animation, i.e. actors, so they can be loaded into Ignition Gazebo. The work will involve making sure that Collada skins and animations, as well as BVH animations, are loaded and animated correctly using SDFormat, Ignition Rendering and Ignition Common.
A huge volume of data is generated every night by large astronomical telescopes around the world. A robust and scalable software infrastructure is necessary to leverage such high-volume, high-velocity data. Fink is an Apache Spark-based broker infrastructure to receive, process and redistribute, in real time, high-velocity astronomical data obtained from telescopes such as the LSST. The aim of this project is to develop an Alert Redistribution System for Fink using state-of-the-art Big Data processing and distribution technologies (Apache Spark, Apache Avro and Apache Kafka). This Alert Redistribution System will help scientific users access real-time data and carry out follow-up research at their end.
File managers are important utilities provided by graphical operating systems to aid users in managing their files. ReactOS has its own file explorer with most basic features that users would expect. However, one essential feature it lacks is the ability to search for files and folders. The goal of this project is to add search functionality that allows users to easily locate the files and folders they are interested in based on name, type, and other file attributes.
The project aims to extend the functionality of the existing AppStore and link it with the bake build system, so as to add functionality to the build system and make it easier for developers to install extensions. The work involved is as follows:
A major effort in empirical asset pricing research is the initial stage of gathering the data, cleaning and filtering it, and then formatting it in a way that simplifies further statistical analysis. This process, when done properly, takes a large portion of a researcher’s time when it would be better spent on doing the actual analysis. By developing a package that automates much of the data import, cleaning, filtering and standardization process, a substantial fraction of the researcher’s time will be saved, while automating a significant portion of the data gathering and management aspect of asset pricing research. We expect this last aspect to support the reproducibility of research by academics and financial professionals. Our over-arching goal is to make the EAPR package an ideal support tool for the wide range of asset pricing research as described in Ball, Engle, and Murray (2016), and for quantitative portfolio construction research. The initial version of the package will work very effectively with asset prices, returns and factors (exposures) data delivered by Wharton Research Data Services, a major source of empirical data for academic asset pricing researchers.
KDE Partition Manager runs all of its authentication and authorization protocols through KAuth (KDE Authentication), a tier-2 library from KDE Frameworks. In the current implementation, all privileged tasks, such as executing an external program like btrfs or sfdisk, or copying a block of data from one partition to another, are executed by a non-GUI helper application. So, instead of running the whole GUI application (KDE Partition Manager) as root, a non-GUI helper is spawned which runs as root and executes the privileged tasks, communicating with KDE Partition Manager over a simple D-Bus protocol. The current implementation may seem like a good idea, but it is not: KAuth is an extra layer added over Polkit-qt, which causes extra overhead. The proposal for this project is therefore to port all the authentication/authorization code from KAuth to Polkit-qt without affecting the original behaviour of KDE Partition Manager.
MoveIt currently implements only discrete collision checking for movements of the controlled robot. A major drawback of discrete collision checking is that it may miss collisions between the sampled time steps. While there exist techniques to alleviate this problem, the resulting algorithms can be relatively slow. To provide stronger guarantees, continuous collision detection (CCD) techniques have been proposed by the research community; they compute the first time of contact between two moving objects along a path.
The new planning framework Tesseract of ROS-Industrial has implemented CCD utilizing the Bullet library. The aim of this project is porting the feature from Tesseract to MoveIt. As Tesseract draws its heritage back to MoveIt, both motion planning frameworks share similarities which makes porting feasible.
Besides CCD, other improvements should be adapted:
Besides API changes, the project includes writing tests and tutorials showing the new capabilities.
What does the titler tool do? : The titler tool is used to create clips containing text and images, which can be composited over videos.
The problem: The titler tool renders XML using an MLT module which uses QGraphicsView. QGraphicsView is considered deprecated. Moreover, adding features to the current back end is difficult as it is known to be buggy.
The solution: Rewrite the tool's backend using QML and QQuickRenderControl by implementing a new QML MLT module which can render QML, and to also create a basic titler which can render basic QML templates.
Why QML? : QML, in general, allows creating powerful animations easily and is much more flexible, which opens up more possible features for the titler.
In this project I would like to focus on implementing the following things:
pgAdmin's Query Tool currently works in 2 different modes:
The aim of this project is to automatically detect whether the query entered will produce an updatable result set, and to enable/disable editing of the results and other parts of the UI as appropriate (for example, disabling the sort/filter UI options if the query string is not one that can be programmatically changed).
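A rough heuristic for the updatable-result-set check might look like the following sketch. The rules are simplified assumptions for illustration; pgAdmin's real detection would need a proper SQL parser:

```python
import re

# Constructs that make a result set non-updatable in this toy model.
NON_UPDATABLE = re.compile(
    r"\b(join|group\s+by|having|union|distinct|limit)\b|\b(count|sum|avg|min|max)\s*\(",
    re.IGNORECASE,
)
# A plain single-table SELECT, optionally with a WHERE clause.
SIMPLE_SELECT = re.compile(
    r"^\s*select\s+.+\s+from\s+\w+(\s+where\s+.+)?\s*;?\s*$",
    re.IGNORECASE | re.DOTALL,
)

def is_updatable(query):
    """Treat a plain single-table SELECT (optionally filtered) as
    updatable; joins, aggregates and set operations are not."""
    return bool(SIMPLE_SELECT.match(query)) and not NON_UPDATABLE.search(query)

assert is_updatable("SELECT id, name FROM users WHERE id > 10")
assert not is_updatable("SELECT u.id FROM users u JOIN orders o ON u.id = o.uid")
assert not is_updatable("SELECT count(*) FROM users")
```

The same predicate can then drive the UI state: editing and row insertion enabled only when `is_updatable` returns true.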
This project aims to quantify the sensitivity of the output to the parameters, i.e. their relevance to the network output, and to introduce a regularization term that gradually lowers the absolute value of sub-sensitive parameters. A very large fraction of parameters thus approach zero and are eventually set to zero by simple thresholding. This method surpasses most recent techniques in terms of sparsity and error rates. The major takeaways are reducing computational complexity, building an adaptive sparsifying framework for any network, and experimenting with it on RL networks.
I will add edge animations to ccNetViz, a large-graph rendering library built on WebGL, including color animations, speed control and other fancy effects such as wavy curves, dotted lines and liquid-fluid effects.
These animations will be fully customizable, and we will also offer some high-quality presets for users to choose from.
Finally, the documentation for edge animation will be completed, and we will also have a demo page showing all of its functions.
Implement a native Share-extension module for iOS and Android, and create a bridge module to use it from React Native.
The project aims to deliver a responsive web-based analytics dashboard, integrated with an in-house catalogue of circRNAs identified from multiple species, with functionality such as searching by genome coordinates and host gene names, visual representation of the inferred circRNA structure, and comparison of isoforms across different samples, linked to the Ensembl genome browser.
This project describes the implementation of a radiology report workflow: the process of producing a radiology report, from claiming a study through to report approval.
This project aims at aligning the ns-3 implementation of TCP with the Linux kernel and testing it. Through rigorous testing, various TCP features such as ECN, RACK, SACK and Paced Chirping will be aligned, and proper documentation of the differences will be prepared.
For this project I wish to develop OCR for television news in a three-phase implementation model. The first phase will consist of detecting text regions in videos. For this I intend to use OpenCV to detect specific features that are unique to each of the mentioned languages (e.g. detecting repeated occurrences of a horizontal line to find text regions for Hindi and Bengali, or detecting strips with two or three colours of high contrast). The next phase will involve developing an algorithm to eliminate duplicate detected text: in a breaking news video, for example, the "breaking news" title and the headlines appear in many frames and must be filtered to remove redundancies. The third phase will involve the actual optical character recognition script, using a Long Short-Term Memory (LSTM) implementation of RNNs.
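The second phase's duplicate elimination could be approximated with a similarity threshold, as in this sketch; the threshold value and sample strings are illustrative, and a fuzzy match also absorbs per-frame OCR misreads of the same on-screen text:

```python
from difflib import SequenceMatcher

def dedupe_detections(frames, threshold=0.85):
    """Keep one copy of each on-screen text that repeats across frames.
    Two strings count as duplicates if their similarity ratio >= threshold,
    so near-identical OCR misreads collapse into one detection."""
    kept = []
    for text in frames:
        if not any(SequenceMatcher(None, text.lower(), seen.lower()).ratio() >= threshold
                   for seen in kept):
            kept.append(text)
    return kept

frames = [
    "BREAKING NEWS: Election results announced",
    "BREAKING NEWS: Election results announced",   # same ticker, next frame
    "BREAKING NEWS: Election resuits announced",   # OCR misread of the same ticker
    "Weather update: rain expected tomorrow",
]
result = dedupe_detections(frames)
assert len(result) == 2
```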
When using GCompris, the difficulty of an activity can increase too much and target different ages within the same levels. The aim of this project is to add granularity to the datasets so the child can better target what to learn. The project will involve updating activities to use JSON files as datasets, so we can have multiple datasets (each targeting a different learning goal) for the same activity.
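One possible shape for such JSON datasets, with a loader that filters by difficulty; the schema is an invented example, not GCompris's actual format:

```python
import json

# Hypothetical multi-dataset layout for one activity: each dataset
# targets a distinct learning objective and difficulty.
raw = """
{
  "activity": "addition",
  "datasets": [
    {"objective": "sums up to 10",  "difficulty": 1, "items": [[2, 3], [4, 5]]},
    {"objective": "sums up to 100", "difficulty": 3, "items": [[42, 37]]}
  ]
}
"""

def select_datasets(config_text, max_difficulty):
    """Return only the datasets appropriate for the chosen level."""
    config = json.loads(config_text)
    return [d for d in config["datasets"] if d["difficulty"] <= max_difficulty]

chosen = select_datasets(raw, max_difficulty=1)
assert [d["objective"] for d in chosen] == ["sums up to 10"]
```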
Particles are made to collide at very high energies at the LHC at CERN in Geneva. These collisions typically generate sub-atomic particles (electrons, neutrinos, etc.) that are highly unstable and exist only for a fraction of a second before decaying further. We detect these particles through their energy deposits, recorded as signals. Detectors have many layers, and by applying an electromagnetic field to separate charged particles from uncharged ones, it is possible to follow the energy deposits (hits) through the detector and map out a trajectory for each particle created in an event. The lifetime of each such particle is typically in the range of microseconds (or less), and a huge outburst of particle hits in all directions must be detected instantaneously. Mapping out particle trajectories is a task in itself: their paths are non-linear (parabolic, helical and so on), and mapping them between detector layers is not simple because the layers have gaps. Help comes from the fact that such particles have high momenta, so their trajectories do not change randomly or rapidly. The goal of this project is to improve the pattern recognition algorithms that map particle trajectories along a straight line.
The FOSDEM team will be able to use OSEM to handle the submission, evaluation and acceptance of requests for stands at its event in the minimum possible number of steps, saving a lot of work and time. Organizers can invite new users to join the app, optionally with a specific role (organizer, info desk, etc.), which means that as soon as conference planning starts, organizers can bring team members in through invitations. Stand submitters will also be able to invite new users for the role of stand organizer. The team will be able to contact the right group, instead of individuals, through email in a single step.
SPDX is an open standard for conveying the components, licenses and copyright information of software in a human- and machine-readable, unambiguous way. To support this, the SPDX community has developed collaterals such as the SPDX specification and programming-language tools, among others. Among the programming-language tools is a Python tool that allows its users to write and read SPDX documents represented in two formats: RDF/XML and tag/value. This project consists of extending the format support to include the JSON, XML and YAML formats. A wider range of formats for interchanging SPDX documents will make their adoption easy and painless, as they will fit the habits and guidelines of more and more development communities. This will help spread the standard, leading SPDX to reach its goals.
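The multi-format support could hinge on a small writer registry, sketched here with stdlib JSON and XML writers. The document fields and function names are illustrative; the real SPDX tools define their own document model and APIs:

```python
import json
import xml.etree.ElementTree as ET

# Minimal illustrative document; real SPDX documents carry many more fields.
document = {"spdxVersion": "SPDX-2.1", "name": "example-doc", "dataLicense": "CC0-1.0"}

def to_json(doc):
    return json.dumps(doc, indent=2)

def to_xml(doc):
    root = ET.Element("Document")
    for key, value in doc.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

# A YAML writer would plug into the same registry once PyYAML is available.
WRITERS = {"json": to_json, "xml": to_xml}

def write_document(doc, fmt):
    return WRITERS[fmt](doc)

assert json.loads(write_document(document, "json"))["name"] == "example-doc"
```

Keeping each format behind a common `write_document` entry point is what lets new formats land without touching the parsing/writing call sites.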
KIOSlaves are a powerful feature of the KIO framework, allowing KIO-aware applications such as Dolphin to interact with services outside the local filesystem through URLs such as fish:// and gdrive:/. However, KIO-unaware applications cannot interact seamlessly with KIO slaves; for example, editing a file in gdrive:/ in LibreOffice will not save the changes to your Google Drive. One potential solution is FUSE, an interface provided by the Linux kernel that allows userspace processes to provide a filesystem which can be mounted and accessed by regular applications. KIOFuse, a project by Fabian Vogt, makes it possible to mount KIO filesystems in the local system, thereby exposing them to POSIX-compliant applications such as Firefox and LibreOffice. This project intends to polish KIOFuse so that it is ready to become a KDE project.
Iodide and Pyodide enable one to do data science computations entirely in the browser. These tools explore a rather unique trade-off space, allowing scientists to work flexibly and communicate effectively rather than switching mediums for different tasks. The notebook-style interface allows best-in-class tools from both JavaScript and Python to be used coherently.
The matplotlib library currently shipped with Pyodide uses the Agg backend for actual rendering. Writing a new backend based on the HTML5 Canvas element's APIs would reduce the size and memory footprint of the final build while also yielding significant speedups. It could also let us take advantage of GPU acceleration and use locally installed fonts alongside web fonts.
The project aims to analyze a set of satellite telemetry to understand the links and dependencies among different subsystems. It should demonstrate an understanding of the links between behaviour changes in each telemetry channel within a satellite, or against external sources of information (mission plan, solar aspect angles, ephemerides, etc.), in order to rapidly characterize future debris events and support risk analysis, close-approach analysis, collision-avoidance maneuvering, forensic analysis and other decision-making. Machine learning can be used to learn the different link models, and the acquired knowledge should be stored in a graph (a Bayesian network). The intermediate and final output should be represented as data interpretable by visualization interfaces, preferably in JSON.
Fuzzing has been a very useful technique to find bugs and vulnerabilities. Fuzzing operating systems, however, has been problematic when the operating system is also responsible for keeping the system running. Using a hypervisor to work around this limitation seems to be an obvious solution. This project would explore using and integrating existing tools to achieve this: DRAKVUF's libinjector combined with AFL to fuzz operating systems.
My project creates an offline component for EBO that gives it the ability to speak through a TTS engine, modulating its voice depending on its mood. The component also contains a dictionary of phrases created by the child, so the robot has a wide range of phrases with which the child feels identified, and the child is not obliged to specify a phrase every time the robot is to speak.
Build a GLR parser-generator as an alternative to the current chunking system to better support long-distance phrasal reordering.
The KM3NeT software component that I will work with is called JPP, and it depends on the ROOT framework. The JPP software framework is used to reconstruct signals from neutrinos. The aim of the project is to optimize a particular algorithm in the JPP software using modern CPU features (such as vector instructions). It is proposed to investigate opportunities for improvement by introducing vector instructions and optimizing data structures. Performance will also be benchmarked.
This project is focused on tracking aircraft with ADS-B messages. The aim is to build a smooth tracking method for when an aircraft fails to report its position to the ground station, using multi-level positioning and data fusion. There are many areas on Earth that are still not covered by radar for tracking aerial activity, such as the huge ocean masses. This project will lay down a framework for better tracking and monitoring of aircraft.
Simulating quantum optics can provide vital insights into understanding how light and matter react with each other and has a variety of applications, such as in quantum computing architectures that use photons as qubits. In general quickly simulating the dynamics of a quantum optical system becomes expensive as the system grows, due to having to solve a large number of differential equations. The goal of this project would be to explore ways to make these simulations more tractable, through both concise application of theory and efficient implementation.
One of the most important steps in tuning an emulator's performance is to identify the hot regions and measure their translation quality. However, in QEMU there is no easy way to identify hot TBs (translation blocks). We therefore propose in this project to enhance QEMU's log system to add these capabilities. Our plan is to add three new capabilities to QEMU: (1) profiling and listing hot emulated blocks, (2) calculating global and per-block translation quality statistics, and (3) allowing all these inspections to be done interactively in the monitor tool.
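Capability (1), listing hot blocks, amounts to frequency-counting executed translation-block addresses. A toy sketch of the idea (the trace values are made up; in QEMU this would live in C inside the TCG execution path):

```python
from collections import Counter

def hot_blocks(execution_trace, top=2):
    """Given a stream of executed translation-block start addresses,
    return the `top` hottest blocks with their execution counts."""
    return Counter(execution_trace).most_common(top)

# Simulated trace of TB start addresses (a tight loop re-enters 0x4000).
trace = [0x4000, 0x4010, 0x4000, 0x4000, 0x4020, 0x4000, 0x4010]
assert hot_blocks(trace) == [(0x4000, 4), (0x4010, 2)]
```

The per-block counts are also the natural denominator for the quality statistics in capability (2): translation quality matters most for exactly the blocks that dominate this ranking.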
Text analysis is a fascinating field that attracts people from many scientific fields. As of today, text analysis is executed through powerful scripts scattered across many packages and libraries in different programming languages, each of which contains only a subset of the available linguistic features and requires high technical knowledge to operate. As a result, it is difficult to obtain a unified result with all the desired features, and many people who lack a computer science background are excluded from using them. The goal of the project is to build an online web GUI that provides its users with an easy way to extract quantitative text profiles from multilingual texts. The text analysis will come from scripts that combine many existing NLP packages written in many programming languages, such as R's udpipe or Python's spaCy. The tool is going to be modular and open source, in order to be easily accessible and adaptable to everyone's needs. The project is going to boost and facilitate scientific research in NLP fields, as it will make text analysis available to people with little to no computer science knowledge (e.g. linguistics students).
The Plasma desktop does not have a hard dependency on any login manager, but SDDM (Simple Desktop Display Manager) is the recommended option. Using SDDM with Plasma, however, results in certain issues, some of which concern consistency between the login screen and the desktop: in practice, as soon as the user veers from the Plasma defaults, the login manager no longer provides an identical visual experience. This GSoC project would solve that issue by adding the possibility of syncing desktop and login manager options. Options that could be synced include the color scheme, font, font size, font rendering and icon theme (obtained from the Plasma theme). Rather than adding discrete options, the suggestion is to have a single option for syncing SDDM settings with a particular user’s settings. Patching SDDM to support Plasma wallpapers would also fall within the scope of the project. Most importantly, because desktop display scaling preferences are likewise not respected in SDDM, the project would also tackle allowing users to set a display scaling preference via the GUI.
Building an API to stage the results of Static Application Security Testing (SAST) tools.
Due to a dirty camera, dusty spots sometimes appear in captured images. These spots can be fixed by cloning similar regions of the image and healing the bad region. This project aims to implement such a healing brush tool for arbitrary images. The tool should also allow zooming in on the image to apply delicate healing to small regions, and should offer variable brush sizes.
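The core healing operation is a blend of a cloned source patch over the damaged region. A minimal grayscale sketch, using pure-Python lists instead of a real image type and omitting feathering, round brushes and zoom:

```python
def heal(image, src, dst, size, alpha=0.8):
    """Copy a size x size patch from `src` over `dst`, blending each pixel:
    result = alpha * source + (1 - alpha) * original.
    `image` is a 2D list of grayscale values; coordinates are (row, col)."""
    sy, sx = src
    dy, dx = dst
    # Snapshot the source patch first, in case src and dst overlap.
    patch = [[image[sy + r][sx + c] for c in range(size)] for r in range(size)]
    for r in range(size):
        for c in range(size):
            original = image[dy + r][dx + c]
            image[dy + r][dx + c] = alpha * patch[r][c] + (1 - alpha) * original
    return image

img = [[100, 100, 100, 100],
       [100, 100, 100, 100],
       [100, 100,  10,  10],
       [100, 100,  10,  10]]   # dark "dust spot" in the corner

heal(img, src=(0, 0), dst=(2, 2), size=2)
assert abs(img[2][2] - 82.0) < 1e-6  # spot pulled toward the clean region
```

A variable brush size maps directly onto `size`, and repeated strokes with `alpha < 1` give the gradual, delicate healing the abstract describes.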
Building a library that enables conversion of Tensorflow deep neural network models to corresponding Spiking Neural Network architectures, with minimal performance losses.
Presently InterMine uses the Struts framework, which is outdated. InterMine provides RESTful web services that allow clients to execute custom or templated queries, search keywords, manage lists, discover metadata, perform enrichment statistics and manage user profiles. The main objective of this project is to migrate the web services from Struts to the Spring framework and document the APIs with Swagger in compliance with the OpenAPI Specification.
In the Debugger tab of the Firefox DevTools, when the debugger is paused on a breakpoint, users can see the values of variables by hovering over them. This project will add inline values beside the variable names so that the user can get a quick overview of all the relevant variables without having to hover over each one of them.
Metal Renegades originally started as an idea on the MovingBlocks forum for a brand new gameplay module for use with the Terasology engine. The new game would be an open-world sandbox, where the player is dropped into a futuristic western world populated by robots. The core gameplay would be driven by faction conflict, building mechanical systems, advanced NPC interaction, resource gathering, and much more. This project takes some of the first steps towards bringing the ideas mentioned in the post to life. Specifically, this project focuses on the underlying world of Metal Renegades, and the interactions of the AI agents that populate it. This world provides a strong backbone to build the core gameplay elements upon.
The aim of this project is to add support for additional delivery channels to Moira. A delivery channel is used to alert the user based on Graphite data using triggers. Additional delivery channels that are proposed are popular on-call incident management tools like PagerDuty, VictorOps, Opsgenie, team chat apps like Discord and voice and SMS API providers like Nexmo.
MapKnitter is based around the upload of images, the positioning of those images on a map, and the compositing of those images into map export formats. This project idea focuses on the systems for tracking changes on those images, collecting them into sets, storing image history, and other improvements which we hope will simplify and reconfigure the MapKnitter codebase.
Synchronous editing is a long-sought MapKnitter feature: the ability to collaborate in real time on image upload and placement, as if it were Google Docs. This will involve changes to both the MapKnitter codebase and the Leaflet.DistortableImage front-end image distortion UI.
Nvim works as both a server and a client. An “Nvim client” can connect to any other “Nvim server”, and Nvim GUIs can show the screen of a remote “Nvim server”. But the built-in Nvim TUI cannot show the screen of a remote Nvim server. The goal of this project is to bring this functionality to Neovim.
The aim of the project can be summarized as: Whenever the user starts Nvim as
nvim --servername a.b.c.d
Nvim will connect to the Nvim server running at a.b.c.d. The Nvim client instance would send input to the remote Nvim server and reflect its UI. So the Nvim client acts like any other external GUI.
WebSim is a web-based robot simulator. It is used as a tool for learning robot programming. WebSim is currently programmable through JavaScript. This project aims to support programming the simulator in Python3 by building a transpiler that targets JavaScript.
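A toy sketch of the transpilation idea, using Python's standard `ast` module; this expression-only subset is purely illustrative, and the real transpiler would have to cover far more of Python 3:

```python
import ast

# Maps Python AST operator nodes to their JavaScript spellings.
JS_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def to_js(node):
    """Recursively emit JavaScript for a tiny subset of Python."""
    if isinstance(node, ast.Module):
        return "\n".join(to_js(stmt) for stmt in node.body)
    if isinstance(node, ast.Assign):
        return "let %s = %s;" % (node.targets[0].id, to_js(node.value))
    if isinstance(node, ast.Expr):
        return to_js(node.value) + ";"
    if isinstance(node, ast.BinOp):
        return "(%s %s %s)" % (to_js(node.left), JS_OPS[type(node.op)], to_js(node.right))
    if isinstance(node, ast.Constant):
        return repr(node.value)
    if isinstance(node, ast.Name):
        return node.id
    raise NotImplementedError(type(node).__name__)

print(to_js(ast.parse("x = 2 * (3 + 4)")))   # let x = (2 * (3 + 4));
```

The generated JavaScript would then be run by the existing WebSim execution path, so the simulator itself needs no changes.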
digiKam has a powerful tool to automatically detect and recognize faces in images. This feature is important for end users, as it can save a lot of time. However, there are currently some issues with the workflow and GUI that can be improved to make this tool even better. In my paper, I sum up suggestions from digiKam users and from my own user experience to propose a plan of work.
The admin panel is the view of the software that interacts with the user and enables overall control of the devices. It fetches data from multiple sources and presents it in an interactive way, which reduces the complexity of dealing with multiple devices. Such views are built to convincingly simulate how a user would interact while managing devices in real life. The admin panel will be an attempt to reduce the complexity that a user faces while managing devices connected to the senz server.
Linear Programming (LP) based sparse learning methods, such as the Dantzig selector (for linear regression), sparse quantile regression, and sparse support vector machines, have been widely used in machine learning for high dimensional data analysis. Despite their popularity, their software implementations in R are quite limited. Our GSoC project aims to develop a new R package -- PaRametric sImplex Method for spArse Learning (PRIMAL) -- for the aforementioned LP-based sparse learning methods, with two key features: 1) it provides a highly efficient optimization engine based on the parametric simplex method, which can efficiently solve large scale sparse learning problems; 2) besides the estimation procedures, it provides additional functional modules such as data-dependent model selection and model visualization.
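As an illustration of the LP reduction these methods rely on, the Dantzig selector min ||b||_1 s.t. ||X'(y - Xb)||_inf <= lambda becomes a standard-form LP by splitting b = b+ - b-. A pure-Python sketch of that construction (a solver such as the proposed parametric simplex engine would then consume these matrices):

```python
def dantzig_lp(X, y, lam):
    """Build c, A_ub, b_ub so the Dantzig selector becomes
    min c.z  s.t.  A_ub.z <= b_ub,  z >= 0,  with z = [b+, b-]."""
    n, d = len(X), len(X[0])
    # G = X'X (d x d) and h = X'y (length d).
    G = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(d)] for i in range(d)]
    h = [sum(X[k][i] * y[k] for k in range(n)) for i in range(d)]
    c = [1.0] * (2 * d)            # minimise sum(b+) + sum(b-) = ||b||_1
    A_ub, b_ub = [], []
    for i in range(d):
        # (G b)_i - h_i <= lam   and   h_i - (G b)_i <= lam, with b = b+ - b-.
        A_ub.append(G[i] + [-g for g in G[i]])
        b_ub.append(lam + h[i])
        A_ub.append([-g for g in G[i]] + G[i])
        b_ub.append(lam - h[i])
    return c, A_ub, b_ub
```

The parametric simplex method additionally exploits that the solution path in lambda can be traced in one pass, rather than re-solving this LP per lambda.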
This proposes a new action system that will allow bears to define their own actions, making bears more useful. Implementing such a system makes it easy to support bears that can suggest multiple patches, which is part of the Improve Diff Handling project.
The aim of this project is to add an interactive music feature to Godot, with functionality similar to Wwise. Wwise itself cannot be used, however, because it is proprietary software and its functionality cannot be offered. Instead, a module with two new classes offering similar functionality will be created: AudioStreamTransitioner and AudioStreamPlaylist. They will inherit AudioStream and can be used in AudioStreamPlayers.
p5.touchgui is an easy-to-use GUI library that enables beginners and advanced users alike to quickly iterate on ideas with UI elements that work with both mouse and multi-touch input. Students, artists, and designers will be able to easily and flexibly add buttons and other UI elements to their p5.js sketches without needing to use DOM elements or write custom, complicated code. The library will be contained within a single file and offer a selection of buttons, sliders, toggles, and joysticks with various styling options.
Proposed UI elements:
Build a Docker based task runner in Golang.
Clearly Defined is a collection of metadata about licenses, copyrights and source code. Clearly Defined clarifies data on open source components, focusing mostly on the open source license, source location and attribution parties. This gives users more information about what their obligations are and helps them feel more confident in meeting them.
The aim of this project is to connect to the Clearly Defined REST API and query the metadata provided by Clearly Defined. This will help in importing the data and reducing the manual clearing work for users. Moreover, FOSSology can contribute back to Clearly Defined by providing new license metadata and helping the community.
Git-issue is a minimalist issue management system based on git. It strives to be simple to use, decentralized, and in line with existing Unix software philosophy and design.
Currently, git-issue can import data from GitHub's issue management interface, but not export it. This is limiting in the following ways:
git-issue has to compete with ubiquitous issue management solutions such as GitHub/GitLab, instead of synergizing with them. Thus, adding export capabilities would complete the round-trip integration between git-issue and GitHub, and pave the way to wider adoption. The same argument can be made for GitLab, for which neither export nor import capability is currently supported. This proposal therefore aims to add export functionality for GitHub and import/export functionality for GitLab.
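As a sketch of what the GitHub export path could look like, this builds (but does not send) the request that would create one issue via GitHub's REST API (`POST /repos/{owner}/{repo}/issues`); the repository, token and issue contents here are placeholders:

```python
import json
import urllib.request

def export_request(owner, repo, token, issue):
    """Build the API request that exports one git-issue issue to GitHub.
    `issue` is a dict with 'title', 'body' and optional 'tags'."""
    payload = {
        "title": issue["title"],
        "body": issue["body"],
        "labels": issue.get("tags", []),   # git-issue tags map onto GitHub labels
    }
    return urllib.request.Request(
        "https://api.github.com/repos/%s/%s/issues" % (owner, repo),
        data=json.dumps(payload).encode(),
        headers={"Authorization": "token " + token,
                 "Accept": "application/vnd.github.v3+json"},
        method="POST",
    )

req = export_request("example", "demo", "<token>",
                     {"title": "Bug", "body": "Steps to reproduce", "tags": ["bug"]})
# urllib.request.urlopen(req) would perform the actual export.
```

Since git-issue itself is a shell script, the real implementation would likely drive the same endpoint with curl; the payload shape is the part that carries over.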
This proposal aims to implement a 'Collections of Projects' feature in Read the Docs, which will allow users to have one or more collections of different versions of the same or different projects.
The project is about adding some new features to the Alga library, and is on the list of project ideas.
A brief overview of the goals:
I ended up with this draft definition for algebraic acyclic graphs:
data AcyclicGraph a = Empty
                    | Vertex a
                    | Overlay (AcyclicGraph a) (AcyclicGraph a)
                    | Connect [a] (AcyclicGraph a)
                    | Shift (AcyclicGraph a)
                    deriving (Eq, Show)
Here, the vertices of the graph are split into "levels", and equal vertices from different levels are considered different. In my proposal, I show how one can work with such a definition and do some basic dynamic programming. I also wrote instances for the basic type classes, from Functor to MonadPlus.
Nothing special about that: I discussed common graph algorithms for weighted graphs, chose which of them should be implemented, and even drafted implementations of some of them.
I started with a discussion of algorithms for network flows and continued with special algorithms for bipartite graphs. Many of them are easily derived from other algorithms' implementations.
Etherpad allows you to edit documents collaboratively in real time, much like a live multiplayer editor that runs in your browser. Write articles, press releases, etc. together with your friends, fellow students or colleagues, all working on the same document at the same time. All instances provide access to all data through a well-documented API and support import/export to many major data exchange formats. And there are tons of plugins that allow you to customize your instance to suit your needs.
The “Etherpad Integration” module allows Drupal site administrators to extend the functionality of Etherpad in Drupal. Currently, this module is only available for Drupal 6 & 7, yet according to usage statistics Drupal 8 is used by more than 25% of Drupal sites. Hence we need to port it to Drupal 8.
This project consists of the development of the following:
The Bitcoin blockchain is a huge data structure, more than 180 GB in size. Due to this size, available Bitcoin parsers take several hours to parse the entire blockchain; for example, the BlockSci parser takes 11 hours with an 8 GB cache. Because of that, most of the available Bitcoin parsers are inefficient on memory-constrained devices. The goal of this project is to design and implement a Bitcoin parser (which may support forks of Bitcoin as well) that uses the available memory efficiently to reduce blockchain parsing time on memory-constrained devices.
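For reference, each Bitcoin block starts with a fixed 80-byte header that any parser must decode; a minimal sketch with the standard `struct` module (a full parser would go on to read the variable-length transaction list that follows):

```python
import struct

# Little-endian: version, prev block hash, merkle root, timestamp, bits, nonce.
HEADER_FMT = "<I32s32sIII"   # struct.calcsize(HEADER_FMT) == 80

def parse_header(raw):
    """Decode the fixed 80-byte header at the start of each block."""
    version, prev_hash, merkle_root, timestamp, bits, nonce = struct.unpack(HEADER_FMT, raw[:80])
    return {
        "version": version,
        "prev_block": prev_hash[::-1].hex(),     # hashes are stored byte-reversed
        "merkle_root": merkle_root[::-1].hex(),
        "time": timestamp,
        "bits": bits,
        "nonce": nonce,
    }

# A genesis-block-like header: all-zero prev_block, the real genesis time/bits/nonce.
genesis_like = struct.pack(HEADER_FMT, 1, b"\x00" * 32, b"\xab" * 32,
                           1231006505, 0x1d00ffff, 2083236893)
print(parse_header(genesis_like)["nonce"])   # 2083236893
```

Memory efficiency then comes from streaming block files through a fixed-size buffer rather than holding the chain, or a large index of it, in RAM.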
An effort to decouple D from the C standard library.
ListenBrainz has recently shifted its statistics infrastructure from Google BigQuery to Apache Spark. Apart from delivering statistics/graphs to end users, Spark's cluster computing and its MLlib can be effectively exploited to build an open-source music recommendation system that promotes artists across the globe, not only the select few promoted by the major labels.
This project adds a microarchitectural enhancement to Ariane, a popular open-source CPU core implementing the RISC-V ISA (instruction set architecture). Currently, the processor is single-issue, meaning it can only issue one instruction per clock cycle. That is a huge performance bottleneck, since most functional units in the processor stay idle when they have no instructions to process. In this project, I will implement super-scalar issue logic which allows Ariane to issue two (or more) instructions in the same clock cycle, so that overall performance will be greatly improved.
This project is about adding very practical functionality to the IRISpy package, which is built on top of SunPy's ndcube package. IRISpy provides functionality for the analysis of observations from NASA's IRIS satellite, which looks at UV emission from the solar chromosphere. The proposed new features will give scientists far greater power to perform IRIS data analysis in Python, and to make new discoveries regarding the energetics and dynamics of the solar chromosphere and transition region, than was previously possible.
For every deployment of Rocket.Chat, there is a team or community that generates a lot of content. Rocket.Chat itself is the repository for much of that content, but the rest typically resides in other applications such as GitHub, Jira or Google Docs, many of which are third-party services, so the content is not owned by the community that hosts the Rocket.Chat server. In one particular scenario, a user may want to write an article and “publish” it to groups or contacts within Rocket.Chat. Today that would typically have to be done using a third-party service. What if, instead, article creation, searching and storage were integrated with Rocket.Chat, such that each user could maintain his or her own library of articles, and that user's contacts could subscribe to see new articles posted by that user and discuss those articles using something like Rocket.Chat Threads?
TensorMap will be a web platform for experimenting with machine and deep learning algorithms, where a user will define an algorithm flow using simple drag, drop and click functionality, along with real-time learning output in the form of graphs.
There has been a recent surge in the development of open-source computational methods for simulating human evolution and analyzing human genome data. These provide many new opportunities for genomic research, but the integration of these different resources is currently poor. In particular, turning models of human history into evolutionary models is notoriously time-consuming and bug-prone, and it requires knowledge of the specifics of each simulation tool. The major goals of this project are to develop a library of widely used historical models that integrate across multiple simulation tools, and to develop more robust and user-friendly model specification tools to automate the workflow of genomic and evolutionary analyses.
Currently, SymPy has a well-established series package in sympy/series. The goal of this project is to improve the existing series package and introduce other optimizable functionality and algorithms in the module. I would like to unify the various existing series expansions, and plan to give the package a concrete structure for further development and improvement. The aims of this project are to:
1.) Implement more operations and improve the existing Formal Power Series methods and classes.
2.) Ring implementation of Formal Power, Laurent and Puiseux Series to make rs_series() the ultimate series function.
3.) Improving and optimizing existing limit algorithms.
4.) Unifying all the existing series expansions under one base class.
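SymPy's existing formal power series entry point, which items 1 and 2 would build on, already works like this (a quick sketch, truncated at order 6):

```python
from sympy import Symbol, fps, sin

x = Symbol("x")
sin_fps = fps(sin(x), x)        # a FormalPowerSeries object with an exact general term
print(sin_fps.truncate(6))      # x - x**3/6 + x**5/120 + O(x**6)
```

Unlike a plain `series()` call, the FormalPowerSeries object keeps the closed-form general coefficient, which is what makes operations on whole series (item 1) and a ring-based `rs_series()` backend (item 2) natural extensions.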
The goal of this project is to benchmark tardis in order to optimize its performance. Airspeed velocity (asv) is a useful tool for testing the performance of tardis after each change, such as adding analytics of the microphysics. Relative performance benchmarks, such as processing time, will be measured using airspeed velocity. After each addition to the tardis project, we can then measure the relative performance compared to previous benchmarks.
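asv discovers benchmarks by naming convention: methods prefixed `time_` are timed, `mem_`/`peakmem_` measure memory, and `setup` runs outside the timed region. A hypothetical benchmark for a tardis-like workload might look like:

```python
import random

class TimePlasmaUpdate:
    """An asv-style benchmark: `setup` runs before timing, and every
    method named `time_*` is timed across commits by `asv run`."""

    def setup(self):
        # Hypothetical stand-in for loading a tardis simulation state.
        random.seed(0)
        self.levels = [random.random() for _ in range(10_000)]

    def time_normalise_level_populations(self):
        total = sum(self.levels)
        [x / total for x in self.levels]

bench = TimePlasmaUpdate()
bench.setup()
bench.time_normalise_level_populations()   # asv would call this repeatedly and record timings
```

`asv continuous` can then compare the current commit against a baseline and flag regressions after each addition.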
Developing an idiomatic D library for persistent/immutable data structures
The following modules are to be integrated with caMicroscope:
The goal of this project is to make recent research in deep learning more accessible.
Implement a color management protocol in weston which allows…
ODK 2 Push Notification enables supervisors to send notifications to field workers with the help of skunkworks-parrot (a desktop application), and workers receive notifications in an Android application named ‘skunkworks-bat’. It enables larger organisations to manage their working groups efficiently. The first aim of the project is to stabilize and test both applications so that they can be fully released and integrated into the ODK 2 tool suite. The second major improvement I propose is to add interactive notifications, along with some minor improvements.
bugbug was started with the aim of making the task of bug tracking simpler, by using machine learning algorithms to differentiate between bugs. However, it is currently in its early stages. It uses basic feature extractors and a Gradient Boosting Tree classifier to make predictions. With this project, I plan on enhancing the accuracy and capabilities of bugbug by making the classifiers more robust, implementing deep learning models, and adding feature extractors for these models. I hope this project is a solid move towards making the tasks related to bug tracking more convenient.
For this GSoC project I propose to expand rover's sailboat functionality, allowing it to move from 'that's cool' to something that can do useful work. I hope that once this project is complete, rover-based sailboats will be the ideal tool for long-endurance, long-range missions on large bodies of water, be it for mapping large areas or taking measurements at specific locations. The new code will result in a robust controller capable of moving efficiently from A to B in a wide range of wind speeds and sea states.
The project will focus on the development of an extension for the Neo4j graph database for querying knowledge graphs that store molecular and chemical information. It would be implemented on top of neo4j-java-driver.
The first task is to enable identification of entry points into the graph via exact/substructure/similarity searches (UC1). UC2 is closely related to UC1, but here the intention is to use chemical structures as limiting conditions in graph traversals originating from different entry points. Both use cases rely on the same integration of RDKit and Neo4j and will only differ in their Cypher statements.
A collection of small to medium-sized tasks which will improve the Haskell editor tooling experience, as a continuation of my work during GSoC 2018 and HSoC 2017.
CLocal Azure will be an easy-to-use emulation engine for users of Azure cloud services. With this project, users will be able to test/mock their cloud applications locally before running them on the Azure platform, thereby reducing cost. The project currently supports Azure Functions, Azure Storage and Azure Cosmos DB services.
In this proposal, we aim to implement detection and analysis of Kafka traffic in Linkerd so that users can monitor their Kafka services more effectively. For this purpose, we will write a Kafka codec and integrate it with Linkerd, design metrics for Kafka services, and then create corresponding charts and forms on the web dashboard.
Machine Translation is one of the most essential tasks of Natural Language Processing where a lot of research has been done. There are various metrics that are available to evaluate the quality of translation but most are obtained by computing the similarity between an MT hypothesis and a reference translation based on character N-grams or word N-grams. Using the data generated from the annotation of the TED Talk in two or more languages, as well as the relations between frames given in the Berkeley FrameNet Data Release 1.7, our task for this project is to build an automated metric system for machine translation, which is intended to measure the frame distance between sentence pairs in two languages. In this proposal, the task has been modelled as a regression task of predicting accuracy scores. The frames evoked and the related frame network has been used as pivotal features of the model.
Right now, MIT App Inventor lacks components for visualizing data using charts. This project aims to implement entirely new components for displaying data visually, such as through pie charts, bar charts, scatter charts or radar charts, while also providing various customization options that allow users to adapt the components to their needs.
DifferentialEquations.jl is the state-of-the-art differential equation solver suite available right now. Written purely in Julia and offering a huge number of algorithms, the package provides both a complete and an easy interface for users. That said, to maintain this state-of-the-art performance, we need to keep updating the code and keep a constant check on performance. This project aims to do exactly that: reduce the memory consumption of the solvers, increase their speed (step optimizations), increase the number of options for optional optimizations, and improve benchmarking techniques.
In Pharo almost every project uses SUnit, so working on enhancing SUnit and providing a user interface is valuable for everyone.
The DrTests project aims to provide a plugin-based UI for dealing with tests in Pharo. It will provide the same features as the current SUnit UI (i.e., running tests, profiling tests and computing code coverage) but will allow plugging in additional analyses of unit tests. The project is still a work in progress; the link is: https://github.com/juliendelplanque/DrTests/
Introduce a common SUnit layer in Pharo instead of several different layers doing the same thing (for example, the SUnit UI, the Jenkins tools and the system browser define three different ways to collect the tests defined in a package).
The goal is:
An overview and web demonstrations of the steps involved in Canny edge detection.
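A sketch of the first steps those demonstrations would walk through, on a plain 2D grayscale list: Sobel gradients followed by double thresholding. The full Canny pipeline additionally includes Gaussian smoothing before the gradients, and non-maximum suppression plus hysteresis after them.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Sobel gradient magnitude for the interior pixels of a 2D grayscale list."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = math.hypot(gx, gy)
    return mag

def double_threshold(mag, low, high):
    """Classify pixels as strong (2), weak (1) or suppressed (0)."""
    return [[2 if m >= high else 1 if m >= low else 0 for m in row] for row in mag]

# A vertical step edge: left half dark, right half bright.
img = [[0] * 3 + [255] * 3 for _ in range(5)]
edges = double_threshold(gradient_magnitude(img), low=100, high=500)
```

Hysteresis would then keep weak pixels (1) only where they connect to a strong pixel (2), which is the step the web demo can animate most instructively.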
Khipu is good software, but it is still in beta, with some bugs and some missing features. Its main library, Analitza, also has some things to fix or implement. That is my objective: to make this software usable and move it to KDE Edu.
Health is one of the fundamental mechanics of any game. The proposal is to make the Health component, which takes care of the health of the player and other NPCs, more extensible by moving it to a separate module and adding more logic flow to it.
The aim of the proposal is to convert the existing Grafana dashboards' static JSON files to Jsonnet files, which will make it possible to dynamically generate the JSON as required.
LibreCAD is a free open source CAD application for Windows, Apple and Linux. It allows industrial designers and graphics enthusiasts to create CAD projects of the highest standard and precision. With this precision and standard comes the need for high-quality, accelerated rendering to visualise a document. LibreCAD 3 was designed to support multiple rendering engines without major modifications to its core. Right now LibreCAD 3 uses Cairo for rendering. For users with high-resolution screens, Cairo is not convenient because of its slowness, due to bad integration with Qt and a missing caching system: at each frame, the rendered data is sent from CPU to GPU, which is very inefficient. OpenGL has lower CPU overhead for draw calls and state changes, lets you take advantage of the GPU to render graphics on your device's screen, and its performance is excellent. The project consists of replacing Cairo with OpenGL in LibreCAD 3, creating a complete, well-abstracted OpenGL implementation in C++ for rendering in LibreCAD.
This project aims at bringing native dataframes to the D programming language. The main task is to implement a dataframe that supports multi-indexing, column binary operations, grouping and data aggregation. Along with the dataframe will come functionality to parse data from, and write data to, CSV files without any hassle.
The present crash reporter that comes with Mozilla Firefox allows users to restore a crashed session (in most cases) and send crash reports to Mozilla. Once a crash report is submitted, it also points users to a page where they can see the decomposed crash details, but it requires an internet connection to do so. The purpose of this project is to extend the functionality of the crash reporter to help the user solve the problem that caused the crash (where possible).
An attractive user interface for the Jekyll-powered AIMA Exercises portal, with added features for bookmarking questions and proposing questions. A new question-bank mode for all the available exercises, with filters to sort by popularity or difficulty level. Upgrading the commenting and discussion system from Disqus to an instance of Staticman. An automated question-addition section using GitHub. Fixing cross-references for various page and exercise references using tooltips and popups. A feature to extract selected exercises as LaTeX, PDF or Markdown. Implementing unit tests, either as separate test scripts or pre-commit hooks, to ensure that future changes to AIMA Exercises follow the right structure and cover the essentials.
Thinking everyone got a message when for some it didn't arrive – that is bad. This project will improve Matrix to better handle cases in which bridges couldn't deliver what they were asked to. It does so by adding new failure modes through which bridges can inform the user that something isn't quite right.
The proposal aims to enhance the functionality of the SUSI.AI Android app. It would improve hot-word detection in SUSI, and add support for wearables like smartwatches and Bluetooth earphones. It also aims to implement missing features such as a profile section, chat deletion, and gesture support.
This project aims to extend the Popper execution engine by creating external actions that can be paired with a Popper workflow to achieve various tasks, like uploading and publishing datasets and articles using workflows and GitHub actions. We will also create example workflows showcasing the actions we have created. The pipelines from v1.x will be ported to workflows for Popper 2.0.
Rooted in INCF's suggested Project Idea 1, I propose a Python client for the CBRAIN API. Anticipating asynchronous scientific computing workflows, our SDK would prioritize: (1) productivity in Jupyter Notebook, and (2) abstracting background tasks from the user. Ultimately, a Jupyter-ready tutorial would demonstrate the ease of analysis with our SDK.
Tangibly, the project would prototype a pip-installable library documented on Read the Docs. Applying our library, a demo script would automate a common CBRAIN use case: end-to-end execution of FreeSurfer's "recon-all" task (cortically reconstructing input T1 MRI scans) and saving the results to a target CBRAIN data provider.
The VLC interface is quite outdated on Linux and Windows. It has a lot of features, but some are not properly exposed. This project for the summer is to heavily rework this interface to make it beautiful and useful again.
Requirements: This project requires Qt/C++ knowledge, and QML would be a nice plus.
Proposed mentor: Pierre
[Taken from: https://wiki.videolan.org/SoC_2019 ]
The redesign of the player has already started and this project will further continue on polishing it and adding missing features. I aim to achieve three major tasks in this project.
This project’s objective is to add an SSH proxy for the Cowrie (https://www.cowrie.org/) honeypot. Currently, Cowrie emulates an SSH server using Python, and provides proxy functionality only for SSH exec commands, but no interactive terminal sessions. In this project we expect to implement the logic to forward SSH protocol messages from clients into backend SSH servers, thus allowing for full-fledged terminal sessions, and converting Cowrie into a high-interaction honeypot. Our proposal includes the addition of a backend management module, which will handle the virtual machines where commands received by the proxy are executed.
JavaScript is widely used in PDFs for all sorts of things. Right now, Okular still fails to interpret and execute most of these JavaScript scripts. We're aiming to improve this by adding support for animations, text formatting, and all kinds of copying via JavaScript.
Release-bot helps upstream maintainers deliver their software to users via automated releases on GitHub and PyPI. However, release-bot could be smarter in many ways. The bot's workflow would be much easier if we created a GitHub App and got rid of some bothersome configuration. In its current state release-bot works with GitHub releases; one possible improvement is to extend its functionality to more git forges, such as pagure.io, which is used by many projects in the Fedora world. Also, release-bot can only work with a single project at a time – let's change that.
This project involves designing the layout of IPTables rules using OPA's policy language Rego, implementing the algorithms that generate IPTables rules from that policy, and writing the code that populates the generated IPTables rules into the Linux host.
LLVM includes binary utilities equivalent to GNU binutils. Basic functionality is done, but some parts, such as Mach-O support, remain incomplete. This project aims to implement those missing pieces and improve usability for those who crave an alternative to GNU binutils.
The goal of this proposal is twofold:
A critical barrier to greater scale and reach for the Mifos X Web App is the time to deploy and the ease of onboarding new financial institutions. Mifos X Web App provides the frontend for a core banking system. Users struggle to independently get the system up and running in spite of having access to documentation or local support. There is a need for a self-guided walkthrough or configuration wizard to help financial institutions set up and configure the system for the first time more quickly and independently. This project strives to accomplish exactly that.
The aim of this project is to enrich the Flux Model Zoo with unsupervised deep learning models, in particular variants of Generative Adversarial Networks. I propose to add the following models: 1) Spatial Transformer Networks 2) StarGAN for facial expression synthesis 3) VAE-GAN 4) Energy-Based GAN 5) Gated Recurrent Convolutional Neural Network
To build tools for the high-speed synchronization of subtitles with the audiovisual content.
Tool A: Synchronization of subtitles between two versions of the same audiovisual content, using audio fingerprinting annotations.
Tool B: Synchronization of subtitles using the burned-in subtitles of the base audiovisual content and a modified version of that content, comparing the timing windows (constructed as two binary strings) of the modified audio and the subtitles.
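Once either tool has estimated the offset between the two versions, applying it to the subtitle file is the mechanical part; a minimal stdlib sketch, assuming a constant offset in milliseconds (real content may also need piecewise offsets around cuts):

```python
import re

TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_timestamp(ts, offset_ms):
    """Shift one SRT timestamp ('HH:MM:SS,mmm') by offset_ms milliseconds."""
    h, m, s, ms = map(int, TS.match(ts).groups())
    total = ((h * 60 + m) * 60 + s) * 1000 + ms + offset_ms
    total = max(0, total)                     # clamp instead of going negative
    h, rem = divmod(total, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return "%02d:%02d:%02d,%03d" % (h, m, s, ms)

def shift_srt(text, offset_ms):
    """Shift every timestamp in an SRT document by the same offset."""
    return TS.sub(lambda mo: shift_timestamp(mo.group(0), offset_ms), text)

print(shift_timestamp("00:01:02,500", 1500))   # 00:01:04,000
```

The hard part of both tools is estimating `offset_ms` robustly from the fingerprints or binary timing strings; this resynchronisation step stays the same either way.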
OpenRoberta is a learning platform. It uses a graphical programming language (based on Blockly) and has code generators/loaders for many robots and embedded systems used in education (extracted from the project site). In this project we'll carry out a series of tasks regarding the graphical programming language, and we'll also write tutorials on how to use the language, to be put in the tutorial section of the OpenRoberta Lab, which is currently empty. As a result of this project, the following products are expected:
Today, data engineering infrastructure requires support for features not currently implemented in Apache's regression library. Regression has massive applications within machine learning infrastructure, with enormous demand for both big data support and effective support for non-linear regression, such as logistic regression. Improving Apache's regression library would therefore have significant impact, reaching a wider audience in this growing, influential field and aiding solutions to data engineering challenges that support data science around the world. Consequently, this project aims to implement a robust starting foundation for the new Apache Commons Statistics regression component, updating from the current limited "math-stat-regression" library and making use of Java 8's features to improve efficiency, functionality and scalability. The other objectives are to reduce dependencies, encourage the future addition of new tools after the port (starting with logistic regression in this project), and make overall improvements such as enhancing the user interface and designing a more intuitive architecture.
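Since logistic regression is the first planned addition, a bare-bones gradient-descent sketch in Python illustrates the fitting loop the Java component would provide (with proper optimisers and regularisation in the real library):

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit w, b for P(y=1|x) = sigmoid(w*x + b) on 1-D data by
    gradient descent on the log-loss."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted probability
            grad_w += (p - y) * x / n                  # dLoss/dw
            grad_b += (p - y) / n                      # dLoss/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Separable toy data: class 1 for x > 2.5.
w, b = fit_logistic([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1])
predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5
```

A production implementation would use matrix-based solvers (e.g. iteratively reweighted least squares) rather than this per-sample loop, which is where Java 8 streams and the existing linear-algebra machinery come in.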
Develop a bidirectional data synchronizer and deliver a fully integrated Nextcloud solution for Plasma Mobile by syncing data from the Nextcloud server to the device and vice versa, enabling users to add a Nextcloud account and easily use the data shared on a Nextcloud instance.
Open Babel is a widely used open source toolkit for cheminformatics. One of its important functions is 3D structure prediction of input molecules. Open Babel's coordinate generation was improved in speed and stereochemical accuracy by the new fragment-based coordinate generation method implemented as a GSoC 2018 project. However, its stereochemical accuracy is still not as high as that of RDKit's distance geometry method.
In this project, I will improve Open Babel's coordinate generation by implementing a new method that combines Open Babel's fragment-based method with RDKit's distance geometry method. The new implementation is expected to be faster than current RDKit and more accurate in stereochemistry than current Open Babel. A better prediction method could benefit a wide range of applications such as drug design.
Currently, NodeCloud supports AWS, GCP, and Azure. This project aims at creating a common dashboard, linked to the portal, for checking resources in a web-based UI. This would make finding resource and service status effortless. The main tasks include:
Re-building the official TensorFlow models to make them TF 2.0 compatible. The proposal describes a holistic improvement of the models repository, upgrading models/research, models/tutorials, models/official and models/samples. It also aims to build new features and improve problem-solving approaches in TensorFlow models, which will improve research prototyping and promote researcher on-boarding.
The official models (including Transformers) will require an end-to-end upgrade with tf.data pipelines and eager execution with DistributionStrategies. Other improvements include bug fixes and the use of tf.GradientTape to compute gradients more efficiently. The Variational Autoencoder and GAN projects will be updated to tf.keras with eager execution as the default.
The Starcross Android app brings night-sky gazing to Android devices.
NodeCloud is a standard library providing a single API for the open cloud across multiple providers, making the open cloud easily accessible and manageable. The proposed work improves existing features as well as adding new features and cloud providers.
The project aims to implement risk metrics and other metrics within the Growth-Maturity-Decline CHAOSS metrics and use cases using Augur, focusing on what we have unearthed as the open source community manager use case.
p5.xr is a library for p5.js that enables WebXR capabilities in p5 sketches. The goal of the library is to allow p5.js sketches to become multi-platform AR or VR projects with little added code. The capabilities of p5 will be greatly extended by this library, and since it is in the pre-alpha stage, it requires constant stabilization and testing while new features are implemented.
I propose to work on the following:
I propose the development of a web application that will support the whole lifecycle of thesis creation, namely a Thesis Management System (TMS) that will benefit students as well as professors. The proposed system aims at eliminating time-consuming procedures and paperwork at the most basic level by encapsulating and automating them within the web app. Moreover, TMS will provide an open source digital repository of completed theses, where student authors can share their work with a wide audience and may be cited more easily by companies and researchers in their academic community.
The existing UI is not user-friendly. We need to develop a more user-friendly one using modern techniques.
Efficient adoption of multiple clouds is often hindered by unconsolidated control and observability. The Tungsten Fabric framework addresses this challenge using a software-defined networking stack, so as to provide a cloud-grade single network fabric capable of interacting with diverse multi-cloud environments in a secure and scalable manner. Following the increasing use of containers, one of Tungsten Fabric's use cases involves integrating its virtual networking in a Kubernetes environment to provide a range of multi-tenant networking features. A significant problem with this use case is the lack of automation and documentation around installation and setup of Tungsten Fabric. This project aims to simplify the process of launching Tungsten Fabric in Kubernetes. To do so, we will be providing automation scripts as well as consolidated, maintainable documentation of the whole process.
Agora is a library of data structures and algorithms for counting votes in elections. Agora Web is a website to conduct online elections. The project will add two-factor authentication as an extra layer of security for users' accounts. All the security concerns mentioned at https://civs.cs.cornell.edu/sec_priv.html will also be addressed. This project will also rebuild the Agora frontend with a better user interface and design; the new frontend will implement lazy loading and Angular Universal for SEO.
This project aims to build a mobile application for Submitty. The application should run on both Android and iOS and provide elementary features for authorization, courses and forums for instructors, teaching assistants and students.
Integrate QEMU with the OSS-Fuzz continuous fuzzing service. Implement functionality to fuzz QEMU’s devices adhering to the VirtIO standard. Fuzzing is a powerful technique for bug finding. QEMU’s implementation of VirtIO devices is a particularly appealing target for fuzzing due to their widespread use and VirtIO’s clear specification of the Guest-Device interface.
Issue-wanted is a web application focused on improving the open source Haskell community. It does so by centralizing and categorizing GitHub issues across many Haskell repositories into a single location. The current issue-wanted code base is a skeleton project with a list of desired features. Over the three month period, I will build the backend by implementing the GitHub API query functions, database schema, asynchronous worker, API endpoints, and the necessary tests. Many more decisions will be made throughout the process. This proposal covers the features and components that make up issue-wanted, and the different ways they may be implemented.
In this project, new image processing algorithms will be introduced to expand the scope of Boost.GIL and provide more functionality to users. This covers specific basic algorithms which can be used to develop more advanced image processing algorithms. One of the main topics is the development of an image processing kernel to serve as a base for other image processing algorithms implemented in this project or in the future.
This project consists of:
1) Allowing the Signedness Type System to capture the signedness of boxed integral types (the special Java wrappers for integral primitives) and the BigInteger class.
2) Annotating and providing new methods for the JDK, Guava and Apache classes.
3) Extending the Value Checker's value-range refinement to wrapper classes and expressions.
4) Doing case studies on files and annotating the necessary changes.
The goal of this project is to adapt the Matterport implementation of the object detection algorithm Mask R-CNN to 3D, in order to classify events produced by the HGCAL detector. This detector is used to detect collisions occurring in the Large Hadron Collider. Its output is reconstructed as 3D images that will then be analysed by Mask R-CNN to detect events.
Red Hen has news recordings of 100+ hours each day. They range across multiple countries and languages. This means that Red Hen can run NLP processing on a wide range of files. This project aims to create a multilingual pipeline with further specifications for each language, using country- and channel-specific data. Several NLP tasks will be performed on the text data acquired in these languages. The pipeline will ultimately run inside a Singularity container on the CWRU HPC.
gravitas aims to provide methods to operate on time in an automated way, deconstructing it in many different ways. Deconstructions that respect the linear progression of time, like days, weeks and months, are defined as linear time granularities; those that accommodate periodicities in time, like hour of the day or day of the month, are defined as circular granularities or calendar categorisations. Visualising data across these circular granularities is often the way to go when we want to explore periodicities, patterns or anomalies in the data. Also, because of the large volume of data these days, using probability distributions for display is a potentially useful approach. The project will bring these techniques into the tidy workflow, so that probability distributions can be examined with the range of graphics available in the ggplot2 package.
To implement the best methods for computing the diameter, radius and eccentricities of graphs, and maximum matching in bipartite graphs.
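For reference, the quantities the project's optimized algorithms must reproduce have simple definitions. As an illustrative sketch only (plain Python, not the project's actual implementation), a naive all-pairs BFS yields every eccentricity, from which the diameter (maximum) and radius (minimum) follow:

```python
from collections import deque

def eccentricities(adj):
    """BFS from every vertex of a connected, unweighted graph;
    the eccentricity of v is the distance to the vertex farthest from v."""
    ecc = {}
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        ecc[src] = max(dist.values())
    return ecc

# Path graph 0-1-2-3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ecc = eccentricities(adj)
print(max(ecc.values()), min(ecc.values()))  # 3 2  (diameter, radius)
```

This naive approach costs O(V·E); the point of the project is to beat it with pruning-based methods that compute the same values much faster on real-world graphs.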
This project aims at enhancing the Juice Shop application by drawing inspiration from modern e-commerce companies and incorporating sublime features that users could encounter in their everyday lives.
These features will enable the Juice Shop project to devise new challenges that could keep the users up-to-date with the latest exploits and vulnerabilities, and make their learning experience more realistic.
Implement the necessary tools for rank-metric McEliece cryptosystems: finish work on rank metric and Gabidulin codes (tickets #21226 and #20970), then use these to create a McEliece cryptosystem class with encoding and decoding.
The OpenMRS organisation provides an open source software platform that gives health facilities the ability to customize their electronic medical records (EMR) system with no programming knowledge. The Common Lab Test OpenMRS add-on module enables users to view and manage patients' test result files. With it, users (with appropriate privileges) can upload, view and delete test results associated with a patient record. The Common Lab module brings to the OpenMRS Reference Application a central place to view and manage test files for patients, where admins can create new test cases and edit patients' test results.
SPDX provides a license list for commonly used open source licenses - the SPDX License List. SPDX also supports defining licenses within an SPDX document using the LicenseRef syntax defined in section 6 of the SPDX specification. In the next release of SPDX, we plan to introduce a mechanism for other organizations or individuals to maintain lists of licenses outside of the SPDX License List, while allowing those licenses to be valid without requiring the text to be in the SPDX document itself. This enhancement has been documented in the SPDX specification issues list. This project automates the registration and management of these namespaces.
The goals of this project are to add capabilities for computing Delaunay Triangulations/Voronoi Diagrams and for generating random geometries to the Boost.Geometry library.
To reach the first goal, concepts for triangles and general meshes and an interface for a Delaunay Triangulation algorithm are to be designed and documented. The next steps will be providing models that conform to these concepts and implementing and testing a Triangulation at least for 2D cartesian coordinates and possibly for spherical coordinates.
To reach the second goal, a random geometry distribution concept will be defined and documented based on the existing concept for random number distributions found in Boost.Random. After that, uniform point distributions on basic geometries will be implemented.
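For intuition, one standard technique such a uniform point distribution typically uses, sampling inside a triangle via a square-root reparametrization of barycentric coordinates, can be sketched as follows (illustrative Python, not Boost.Geometry's C++ API; all names here are made up):

```python
import random

def uniform_point_in_triangle(a, b, c, rng=random):
    """Sample a point uniformly inside triangle (a, b, c).
    The sqrt on the first variate corrects for the fact that area
    grows quadratically toward the edge opposite vertex a."""
    r1, r2 = rng.random(), rng.random()
    s1 = r1 ** 0.5
    u, v, w = 1 - s1, s1 * (1 - r2), s1 * r2   # barycentric weights, sum to 1
    return (u * a[0] + v * b[0] + w * c[0],
            u * a[1] + v * b[1] + w * c[1])

rng = random.Random(42)
pts = [uniform_point_in_triangle((0, 0), (1, 0), (0, 1), rng)
       for _ in range(10000)]
# Every sample lies inside the unit right triangle
assert all(x >= 0 and y >= 0 and x + y <= 1 + 1e-12 for x, y in pts)
```

The Boost.Geometry version would express the same idea generically over point and geometry concepts, with the random source supplied by a Boost.Random-style engine.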
This project aims at developing various image processing algorithms and manipulation routines for sunkit-image, an affiliated Python package of SunPy. The analysis of solar images is of paramount importance to the heliophysics community. Such analysis reveals various factors which affect the Sun and which in turn affect everything here on Earth. Moreover, the surge in the popularity of Python for data analysis and scientific computing over the past few years makes the need for such a library in Python hard to overlook. This project aims at bringing the various solar image processing algorithms under the umbrella of one library.
Tokio provides an instrumentation API using Tokio Trace, as well as a number of instrumentation points built into Tokio itself and the Tokio ecosystem. The goal of the project is to implement a library for aggregating metrics from said instrumentation points, and a console-based UI that connects to the process, allowing users to quickly visualize, browse and debug the data.
Because processes can encode structured and typed business logic with instrumentation points based on tokio-trace, a domain-specific debugger built upon those can provide powerful ad hoc tooling, e.g. filtering events by connection id, execution context, etc. As instrumentation points of underlying libraries are collected as well, it is easy to observe their behaviour and interaction. This is a notable advantage over traditional debuggers, where the user instead observes the implementation.
I will be working on enhancing many functionalities of the current PSLab Android application. I will also add functionality to connect to the PSLab hardware wirelessly from the Android application.
Maxima is a computer algebra system which has been growing for the past 40 years. However, given the growing use of Python in neuroscience and the need for a common platform for scientific computation and numerical capabilities, I propose:
Open Event is an application that allows users to find and book tickets for events. This proposal is about integrating major missing components of Open Event (adding missing API components and payment gateways for events) as well as improving existing components by adding features, polishing the UX/UI and improving the codebase.
Terasology uses a desktop launcher to help users manage different game versions and tweak additional settings. The launcher is developed using JavaFX, but the mode of distribution of JavaFX has caused multiple issues for users. This project proposes to enhance the launcher while continuing the use of JavaFX and embedding a JRE to provide a hassle-free experience for end users who don’t have it installed.
The current PEcAn visualization Shiny apps, 'Workflow Plot' and 'Global Sensitivity', face problems including instability and slow responsiveness.
I suggest improving the stability and performance of the apps by lazy-loading large amounts of model output data, improving the performance of plots, caching plots within the apps, and improving app scalability with async programming.
SymPy's basic stats module is fairly developed, but it will only reach its true potential and find wide applicability once advanced features are added: compound distributions, different stochastic processes, random matrices, and the ability to export expressions to external libraries. I plan to implement these and other features for the stats module.
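For context, the existing symbolic machinery these features would build on looks like this (a minimal sketch using the current sympy.stats API; the compound-distribution support itself is what the project adds):

```python
from sympy import Symbol, sqrt, pi, exp, simplify
from sympy.stats import Normal, density, E, variance

x = Symbol('x')
X = Normal('X', 0, 1)   # standard normal random symbol
Z = Normal('Z', 0, 1)

# Closed-form density of a standard normal
assert simplify(density(X)(x) - exp(-x**2 / 2) / sqrt(2 * pi)) == 0

# Moments compose symbolically for independent sums -- the kind of
# algebra that compound distributions and stochastic processes extend
print(E(X), variance(X + Z))  # 0 2
```

A compound distribution generalizes this by letting a distribution's parameter itself be a random symbol (e.g. a Normal whose mean is Normal), with the marginal derived symbolically.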
The current importers and exporters in Blender are written in Python and take a long time when handling complex models, especially meshes with millions of vertices, which aren't uncommon. These operations can take several minutes to process, which frustrates users and decreases their productivity. This project will provide faster import and export operators for the OBJ, STL and PLY formats (both ASCII and binary), achieved by porting them to C/C++ and creating a common import/export framework.
The goal of the project is to implement a plugin with various statistical models underneath, from which users can choose the one that works best for them for classifying their mail into ham and spam. Documentation for the merged patch and a tutorial on using the plugin will also be created.
I chose the project ‘Theme Support for Rocket.Chat Android App’ because it integrates my two loves of android development and strong visuals and graphics. I am passionate about UI/UX design and implementation. Being a freelance graphic designer, artist and programmer, I think I would be a unique member to have on board the Rocket.Chat open source community.
I'm proposing to make a friendly but elegant GUI that will run in any browser, on localhost or a server, to scan for license statements in any open source software. This can later be integrated with the FOSSology GUI. New algorithms can be implemented for better and faster results: various Approximate Nearest Neighbour (ANN) algorithms such as LSH, KD-Tree and Greedy Search can be implemented as proposed. Moreover, the existing algorithms can be improved if required. A max-voting scheme to choose a single output will be implemented as per the proposal.
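As a hedged sketch of the MinHash/LSH idea (illustrative Python only; the snippets, parameter choices, and function names are made up, not FOSSology's code), hash signatures let near-duplicate license statements be compared cheaply without exact string matching:

```python
import hashlib

def shingles(text, k=5):
    """Set of k-character shingles of a whitespace-normalized text."""
    t = ' '.join(text.lower().split())
    return {t[i:i + k] for i in range(max(len(t) - k + 1, 1))}

def minhash(sh, num_hashes=64):
    """MinHash signature: for each seeded hash function keep the minimum
    hash over all shingles; matching entries estimate Jaccard similarity."""
    return [min(int(hashlib.sha1(f'{seed}:{s}'.encode()).hexdigest(), 16)
                for s in sh)
            for seed in range(num_hashes)]

def similarity(a, b):
    sa, sb = minhash(shingles(a)), minhash(shingles(b))
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

mit = "Permission is hereby granted, free of charge, to any person"
mit_var = "Permission is hereby granted free of charge to any person obtaining"
gpl = "This program is free software: you can redistribute it"
assert similarity(mit, mit) == 1.0
assert similarity(mit, mit_var) > similarity(mit, gpl)
```

In a full LSH scheme the signatures would additionally be banded into hash buckets so that only candidate pairs are compared, avoiding the all-pairs scan.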
The goal of this project is to showcase, in an easy and intuitive way, the advantages and distinguishing features of hosting a dynamic Hydra-based API, while delivering other standalone features for the Hydra community. For that, a web application will be built that gives the user a UI to change and play with a Hydra API, plus a Smart Client console they can use to send queries like "show cities members" directly to the dynamic API, all designed with the user experience in mind so that the interaction is transparent even for people with no Hydra knowledge.
Currently there is a steep learning curve in showcasing how Hydra clients can leverage the absence of hard-coded endpoints. Thus, this proposal consists of showing how a Smart Client can create and interact with different Hydra API architectures and still recover the information needed.
The aim of the project is to improve SNARE/TANNER over the summer. The major goals are to implement new emulators supporting vulnerabilities such as XXE injection, template injection, PHP object injection and NoSQL injection. Other goals include increasing the code coverage of SNARE and TANNER, replacing the Docker implementation with aiodocker, and improving logging in SNARE/TANNER.
I am super excited about the project and I am sure this will be an awesome summer. Let's do this!
This project aims to build an ASR pipeline for a European language (German); it must be built as a Singularity container on the Case HPC and put into production, processing daily incoming files.
With an increased focus on the outliner in Blender 2.8x, many improvements could be implemented to increase the outliner's usability. Synced selection between the outliner and other editors, arrow key navigation, range selection, and other standard interactions will be implemented to make the outliner more intuitive. Other various UI tweaks, operator improvements, and menu organization will also be added in this project.
Crowdalert is a hybrid mobile application built on React Native which helps in reporting and viewing incidents around the globe.
The goal of this project is to develop a tool that visualises the event Directed Acyclic Graph (DAG) data structure describing the conversation history in a room. It will be a real-time visualisation of the DAG of a given Matrix room, as seen from the perspective of one or more HomeServers (HSes).
This tool will be useful for debugging or administering Matrix HSes by enabling people to easily see how the federation process works.
TARDIS is a Monte Carlo radiative transfer code whose primary goal is the calculation of theoretical spectra for supernovae based on a number of input parameters, such as the supernova brightness and the abundances of the different chemical elements present in the ejecta. The main idea for this procedure is that by finding a close match between theoretical and observed spectra the parameters that actually describe the supernovae can be identified.
The objective of this proposal is to incorporate new atomic data into the TARDIS database. Accomplishing this requires several tasks: parsers for different file types must be written, along with unit testing, full integration with the TARDIS codebase and more. Finally, it will be crucial to determine how the new atomic data affects the synthetic spectra.
The result of this work will not only be of great value for TARDIS, but also for many researchers who require atomic measurements.
Using machine learning methods to tune database configurations automatically.
TensorFlow.js is highly computationally demanding when doing prediction or classification. If such computation happens on the UI thread, it blocks everything until the computation is done, which results in a bad experience when the web page renders the results. It is natural to come up with a multi-threaded model: let the main (UI) thread render the UI and listen for results from other threads, and spawn another thread whenever computation is needed. The Web Worker is the basis for this multi-thread model in the browser; we can use Web Workers to separate the heavy computation from UI rendering.
A project to help Linux kernel performance and/or security by analyzing and fixing race conditions in the Linux kernel.
Nakade et al. proposed an approach to model checking which allows reasoning about many possible program executions using only one program run. This project aims to further reduce the state space being explored by considering a symbolic graph instead of a concrete one. Relations on such a symbolic graph will be inferred using the dynamic POR feature of JPF.
Agora Web is a platform for managing elections. It makes use of the Agora library, which counts votes and provides election results. This project will focus on creating a Slack application to conduct elections within Slack channels using preferential voting algorithms supported by the Agora library. The application will push the Agora library to yet another set of users: Slack users. A few similar Slack applications provide such functionality, but most of them offer a very short list of vote-counting methods, if not just one. The Agora library, with its large number of voting algorithms, will be used to create this Slack application, providing its users with over 30 vote-counting methods.
Pursuing the goal of running Allpix-Squared simulation events - independent by nature - in parallel has led to the identification of performance bottlenecks that prevent Allpix-Squared from fully utilizing available CPU cores and from scaling the execution time with the number of used cores. In this year's GSoC, I propose to continue working on solving these bottlenecks, most importantly the Geant4 dependency, by implementing a custom run manager encapsulated within Allpix-Squared that would fix the scalability issues and allow events to run in parallel.
AirMashup for Liquid Galaxy is a project that helps people comprehend how airspace is structured. Displaying airways, airports, flight levels and other elements that keep airspace structured in Liquid Galaxy is an attractive way to study these concepts.
In most cases, around 80% of the icons in a font are unused, which unnecessarily reduces the performance of a web page. This project aims to solve that problem by allowing users to select icons and generate a custom font from them. Recurring users can also import the icon configuration file to continue building the same font from where they left off.
Iregnet is the first R package to support general interval output data (no censoring, as well as left, right and interval censored data) and elastic net regularization. The package has already been coded and optimized; it now needs to be made error-free, tested and documented so that it passes all CRAN checks. After this GSoC project, iregnet will be feature-complete (a cross-validation method will be coded), well documented and available on CRAN.
Forms made using the Page Forms extension for MediaWiki can be used to add and edit template calls in pages. This project aims to add a special page providing a calendar interface for creating and editing pages that call templates containing one or more date fields. This interface will be built with the FullCalendar JavaScript library. Furthermore, a similar notion will be used to make an interface for pages with coordinate fields, allowing users to create and drag markers around on a map.
This project sets out to achieve two goals. The first objective is to update the annotation system for Red Hen's NewsScape dataset to FrameNet 1.7 using the Open-Sesame and Semafor parsers. The second objective is to expand the lexical units, frames and frame-to-frame relations in FrameNet 1.7 through a knowledge-driven approach and a distributional semantics approach. The knowledge-driven approach uses BabelNet to induce the frames of unrecognized lexical units in the tagged NewsScape dataset. The distributional semantics approach uses Deep Structured Semantic Models (DSSM) to create word embeddings of lexical units (LUs) to resolve inconsistency in the FrameNet hierarchy, tag LUs with their missing frames, and locate new frames using the SemCor corpus. If time permits, DSSM will be used to expand the frame-to-frame relations with Entity and Event frames using the ACE 2005 Entities and Events dataset.
Firefox supports a long tail of infrequently used image and audio formats to support the occasional website that uses them. Each such format requires the Firefox decoder to use a new open source library for parsing and decoding. This, unfortunately, increases the attack surface of Firefox; as we saw at Pwn2Own 2018, Firefox was successfully exploited via a bug in one such library (libogg in this case).
This project proposes to sandbox third-party libraries in Firefox by building a new software-fault isolation toolkit. Our toolkit will build on the WebAssembly compiler to isolate libraries in Firefox. As part of this toolkit we will also develop and apply a library for safely interfacing with sandboxed libraries (and sanitizing data coming from them). With this toolkit we can ensure that any vulnerability in third-party libraries (e.g., libogg or libpng) cannot be used to compromise Firefox.
This project is a set of 4 more advanced improvements for the already existing mlpack KDE codebase.
They include improvements in cases where:
It also implements a new tree data structure called "subspace tree" which is a dimensionality reduction tree aimed at improving computation at high dimensionality.
The goal of this project is to enable Trace Compass to analyze and display some basic information using the Event Recording infrastructure. Trace Compass is a software for viewing and analyzing any type of logs or traces. The basic information to analyze and display may include CPU usage, IRQ analysis (IRQ statistics, IRQ table, IRQ vs count, IRQ vs time), Linux kernel views (control flow, resources), etc. Advanced support in Trace Compass could include dynamic memory traces, stack usage, network packet flow, etc.
If the generation of trace data (LTTng) is successful, it can then be transferred via TCP/UDP (TCP is already available) from the target running the RTEMS application to the host.
I would like to contribute to the mobile department of Amahi by adding new features to their iOS application and improving the user experience. I have many ideas to make the app look even better and to make it more user-friendly.
I would add a walkthrough screen to the app where the user can learn more about Amahi's services, along with other features such as upload and delete functionality. I would also make it possible for the user to easily share files via third-party apps such as email, iMessage or Facebook.
I also believe that Chromecast support would be greatly appreciated by many users, and improved searching and filtering features would be very useful too. I would love to work on these new features to provide an even better user experience to Amahi's users.
At the present stage, Apache OODT provides a web app to monitor the status of each component and of ingested files, metadata and workflows. This main dashboard is known as the OPS UI and is based on the Apache Wicket Java web framework. Though it provides basic monitoring functionality, like retrieval of product status, metadata, workflow and platform health information, it still lacks a few important features such as querying over products, product removal and workflow termination. Further, the existing UI is not very user-friendly and prints stack traces when backend errors occur. Moreover, users have to deploy the complete OPS UI even when embedding components individually in their applications. The intention of this proposed project is to address all of those loopholes by implementing a new component-based React UI with enhanced REST APIs. The implementation discussed here is planned to be released with Apache OODT 2.0.
I am highly interested in APP4MC Topic 5 (CPU-GPU Response Time and Mapping Analysis) and would like to contribute to this topic with my experience in response time analysis. With my passion for Java implementation and for analyzing Amalthea models on the APP4MC platform, I believe there will be a positive outcome that contributes to the Eclipse community's growth as well as to my personal achievement.
Software Heritage works on archiving and sharing source code before it becomes extinct. One of its major tasks is to ingest, from time to time, the latest source code available into its database from all the possible sources where code can be found. There are lots of sources from which a person can get code, so I want to work on exactly that: I want to implement new listers and loaders to increase the archive coverage, so that the organisation can fetch source code from as many sources as possible and preserve it.
OpenCine is a raw processing suite in development by apertus°. This task is about providing frameserving capabilities to OpenCine, enabling real-time viewing of RAW inputs using an advanced video editor like VLC. The task also involves implementing features like reading multiple frames of RAW video and providing support for various RAW inputs to OpenCine using the LibRaw library.
Add a clang-format configuration 'NetBSD' reflecting the NetBSD KNF (Kernel Normal Form) style and upstream it to the LLVM project. Find and specify style rules missing from clang-format based on the /usr/share/misc/style file, implement the rules demanded by the NetBSD style in the upstream tool, and integrate it with the NetBSD distribution, shipping it with MKLLVM=yes.
Our React Native implementation is relatively newer than its native counterparts, so it lacks many important features and is buggier. The aim here is to implement some of the important features the application lacks, replace some legacy libraries with new ones to improve performance and user experience, and resolve both discovered and undiscovered bugs. The most important missing feature is the ability to fully share files: the app can currently download and view already-shared files, but there is no way to upload a new one. Second, a feature that really makes Rocket.Chat stand out is slash commands, which let the user perform trivial actions, like inviting a user or archiving a channel, in fewer taps. Third, switching from react-native-action-sheet to react-native-reanimated-bottom-sheet will increase performance and improve the user experience. Last but not least, I will add 'About', 'Licence' and 'Contact us' sections to the settings, increasing its functionality.
In the past two years since the projects around decoupling Open Event’s frontend and backend began, significant progress has been made in developing the new frontend and an API-centric backend. However, the entire system in its current state is not yet ready to go into production. Along with minor bugs, there are areas like ticket sales, orders and role invites which require a substantial amount of work before the system becomes stable enough to act as an independent product which can be marketed to other open source organisations. Eventyay is FOSSASIA’s production deployment of Open Event. Replacing the stable legacy version with the new system will require several code changes as well as DevOps configuration. In my GSoC project, I intend to take the project to a stable, production-ready state, with the exact details outlined in the proposal which follows.
Collaboration is ingrained in human nature; without it we, as a species, wouldn’t be able to build the astonishing buildings we have today. Fundamental to collaboration, however, is communication, which enables the coordination of big projects. To make work on a project easier we use tools like FreeCAD. It offers possibilities to model objects (buildings, for example, but not exclusively) that shall soon be built. For that purpose FreeCAD supports BIM (Building Information Modeling). This project aims to integrate the BCF (BIM Collaboration Format), which is designed to communicate issues/topics associated with a particular model. These issues/topics can then be visualized directly in the design tool and don’t have to be searched for in the model based on a description in a PDF file.
Improving the overall platform with the aim of increasing the user base. This involves several feature additions, including featured circuits and a search engine, along with improvements to some pre-existing features like group assignments.
LORIS, or the Longitudinal Online Research and Imaging System, is a research data platform (github.com/aces/LORIS) for neuroscience studies. It is a web-based, open-source framework that assists with data collection and data sharing across sites. LORIS hosts frontend services that allow researchers to view, manipulate, and share data within the platform. It also has backend services, implemented as a RESTful API, that allow for data sharing.
Managing neuroscience data and databases well is essential for research to move forward. For LORIS and other data management platforms to run smoothly, there needs to be a well-managed set of tests in place with very good test coverage.
The project idea aims to increase the test suite of LORIS to improve test coverage and further improve the LORIS platform. This will be accomplished by both improving on the already existing test suite and also adding to it. The goal will also be to improve the maintainability of the code implemented. Improving LORIS’s test coverage will do much to help the LORIS developers and users and will better the platform’s usability and effectiveness.
A more user-friendly interface for the Special:ViewData page in the Cargo extension.
| Time | Task |
|---|---|
| May 7 - 27, 2019 (three weeks) | Bonding with Red Hen mentors; A full discussion; Getting familiar with datasets and tools; |
| May 28 - June 10, 2019 (two weeks) | OCR data cleaning; Dataset making; ASR upgrading to Deep Speech 3; |
| June 11 - 24, 2019 (two weeks) | Model training; Fine-tuning; Integration into Chinese pipeline; |
| Milestone 1 | Get the improved ASR and OCR demo running; |
| June 25 - July 8, 2019 (two weeks) | Testing; Documentation; Deployment inside Singularity on the Case HPC; |
| July 9 - 15, 2019 (one week) | Collecting and preprocessing data for NLP tasks including word segmentation, POS tagging, NER, sentiment analysis; |
| Milestone 2 | Accomplish the ASR and OCR tasks; Start on the remaining NLP tasks; |
| July 16 - 29, 2019 (two weeks) | Working on NLP tasks; Get a running demo |
| July 30 - August 13, 2019 (two weeks) | Testing, documentation and deployment for NLP tasks; |
| August 13 - 20, 2019 (one week) | Flexible time; Might write a summary; Discussion with Red Hen mentors for interesting future work; |
| Deadline | Accomplish all tasks |
My goal is to implement a design policy that will dictate how memory is accessed, how dimensions are specified (for example, dynamic or static dimensions), and which device or execution policy is used (for example, GPUs, CPUs, etc.), and to try to shift to mdspan or a similar design.
This proposal consists of a few tasks that implement new features for Kapitan which I believe will add value to the software. Task 1: dependency management. With this feature, Kapitan users will be able to fetch sources via HTTP(S) and/or git during compilation into specific targets. This will allow them to manage online dependencies much more easily.
Task 2: support for Helm chart import. This will allow existing Helm users to import their charts into Kapitan projects and render the charts together with the values set in the Kapitan inventory to compile into a target. This integration will make it easy for users to switch to Kapitan from Helm.
Task 3: Kapitan binary. Kapitan exists as a pip package to be used in the CLI and as a Docker image. Users have pointed out that having a portable binary for Kapitan would be very helpful. This task aims to meet such needs by packaging Kapitan into a static binary with the use of third-party build tools.
Task 4: JSON schema validation for k8s. Kapitan has its own channel in the k8s Slack community and is used by a large number of k8s users. This feature enables pre-checking of the compiled output of k8s configuration files.
To help provide a foundational user experience before OGV releases on the web, I want to help support more device resolutions and make sure the app is responsive on all devices. I would really like to dig into adding new CSS style breakpoints to the mobile UI, and improve the accuracy of the initial login flow alerts to make the initial experience more enjoyable.
Once the basic responsive foundation is in place, I suspect users will enjoy the ability to quickly visualize models in different colors using Hex/RGB color codes. There may also be potential for users to assign unique materials and textures to different 3D models and BRL-CAD designs by utilizing .MTL files in combination with OBJ files.
Adding a progress bar that indicates model upload progression and overall loading times also seems like a wonderful addition. Alongside contributing a loading bar component, ensuring network disconnections are reported to the client in a comfortable and informative way is very important. Constructing a custom JavaScript loading component and perfecting it over time would be excellent for OGV!
At the moment it’s difficult to customise webpack’s output, and doing so requires deep knowledge of the internal workings of webpack and webpack-cli. The reporter implementation would cover:
Converting the current Template Toolkit setup to ReactJS, and then adding edit previews for non-release entities in MusicBrainz, similar to what exists for the release entity.
Draft proposal for the DroneSym project of SCoRe Lab, which mainly focuses on building a database abstraction layer along with its documentation.
This project is about the development of Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers in TMVA, both of which belong to a general class of neural networks called Recurrent Neural Networks (RNNs). These layers have many important applications in the realm of data analysis for particle physics experiments. As an example, LSTMs can be used for track reconstruction of charged particles in the Large Hadron Collider (LHC). They can also be used for analyzing the voltage time series from the electronic monitoring system present in superconducting LHC magnets.
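For orientation, the computation a GRU layer performs at each time step can be sketched in a few lines of NumPy. This is only an illustration of the recurrence, not TMVA's actual API; the weight-matrix names are made up for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step on input x with previous hidden state h."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1 - z) * h + z * h_cand           # blend old state and candidate
```

With all weights at zero, both gates sit at 0.5 and the candidate state is zero, so the hidden state simply halves at every step; that makes the recurrence easy to sanity-check.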
Create Write Activity for Sugarizer
LibRoadrunner is a high-performance SBML-based simulator that uses LLVM to generate very efficient runtime code. This enables LibRoadrunner to simulate models on par with compiled C/C++ code. By combining LibRoadrunner with standard optimization algorithms it is possible to use it to fit models to data. At present this is done by writing code to link the standard Python optimizers available via scipy with LibRoadrunner. Although this works, it is inefficient, and for large models it is not practical. In this project we would like to develop a C/C++-based optimization library that can be used directly by LibRoadrunner without having to go via Python. This would enable us to provide high-performance optimization capabilities.
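To make the fitting task concrete, the loop such a library has to perform looks roughly like this: repeatedly simulate the model with trial parameters and minimize the squared error against the data. The sketch below uses a plain NumPy grid scan and a stand-in exponential-decay model rather than LibRoadrunner's real API.

```python
import numpy as np

def simulate(k, t):
    # stand-in for a model simulation, here a single exponential decay
    return np.exp(-k * t)

def fit_decay(t, observed, k_grid):
    """Pick the rate constant minimizing the sum of squared residuals."""
    errors = [np.sum((simulate(k, t) - observed) ** 2) for k in k_grid]
    return k_grid[int(np.argmin(errors))]
```

A real optimizer would of course use something smarter than a grid scan (gradient-based or evolutionary search, for instance), but the objective function is the same.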
This project is to rewrite the widgets in The Virtual Brain user interface as reusable components which can be employed from a Jupyter notebook, for use in the Human Brain Project Collaboratory.
This project will focus on improving the unit tests as well as creating a more complete unit test coverage for p5.js. This project would also include creating a tutorial for new contributors, covering the basics of unit testing for p5.js and how to write and add them.
The project aims at adding additional functionality to the group theory section of the combinatorics module: implementing the computation of composition series, Abelian invariants, polycyclic groups, quotient groups, intersections of subgroups, Hall subgroups, modulo-pcgs and group automorphisms.
OWASP DefectDojo is a popular open-source vulnerability management tool and is used as the backbone for security programs. It helps you keep proper records and manage your product testing engagements for easy review and recall. It is easy to get started with and to work on!
This project aims to develop a novel method of automatically synchronizing the lyrics of a song to its corresponding audio, i.e. a line from the lyrics is displayed at the corresponding moment in the song in which that line is sung. The primary method through which this synchronization will be done is syllabic analysis. Applications exist where the number of syllables spoken in an audio file can be extracted without processing the words themselves, resulting in much faster runtimes. By similarly analyzing the number of syllables in a chunk of text, a process for which several very good algorithms exist, we can roughly match the timestamp of each syllable in the audio file to the corresponding syllable in the lyrical text. This process would take place in a separate module from the actual SwagLyrics application so as to facilitate speed of retrieval when the user is actually streaming. Several obstacles, which will be discussed in more detail, exist: these include stripping away background music and getting specific audio information from Spotify’s stream.
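A highly simplified sketch of the text side of this idea is shown below: count syllables per lyric line with a crude vowel-group heuristic and spread the lines proportionally over the track duration. The real project would instead align against syllable onsets detected in the audio; the function names here are invented for the example.

```python
import re

def count_syllables(word):
    # crude heuristic: one syllable per contiguous vowel group
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def align_lyrics(lines, duration_s):
    """Assign each line a start time proportional to its syllable share."""
    counts = [sum(count_syllables(w) for w in line.split()) for line in lines]
    total = sum(counts)
    timestamps, elapsed = [], 0.0
    for line, c in zip(lines, counts):
        timestamps.append((round(elapsed, 2), line))
        elapsed += duration_s * c / total
    return timestamps
```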
Virtual memory compression is a memory management technique implemented by multiple operating systems ranging from Windows to Darwin. Depending on the implementation, it can either reduce swapping to a physical storage medium or completely replace it. In the vast majority of systems, compression and decompression are generally known to be faster than disk I/O. The main goal of this project is to implement a compressed in-memory disk that can be utilized as swap storage, much like zram on Linux.
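The core mechanism can be sketched in a few lines of Python (a toy model of the idea, not the NetBSD kernel code): pages that would have been written to disk are compressed and kept in RAM instead.

```python
import zlib

PAGE_SIZE = 4096

class CompressedSwap:
    """Toy zram-like store: swapped-out pages live compressed in RAM."""
    def __init__(self, level=6):
        self.pages = {}
        self.level = level

    def swap_out(self, idx, page):
        assert len(page) == PAGE_SIZE
        self.pages[idx] = zlib.compress(page, self.level)

    def swap_in(self, idx):
        # decompress and evict from the store, as a real swap-in would
        return zlib.decompress(self.pages.pop(idx))
```

The win comes from typical page contents compressing well; a mostly-zero or text-filled page shrinks to a fraction of its 4 KiB, so decompressing it beats a disk read.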
Provide more convenience for organizing and retrieving sketches within an individual account through a search bar and a collections tab.
The website would have a dynamic page for each port which would display
Port Information: Name, Description, Version, Maintainers (github and email), Dependencies, Long Description, Homepage, Installed Files etc.
Installations Statistics: Number of installations, Installations vs Month, Installations vs Version for last 12 months, number of updates vs month.
The API would make the collection of data more powerful by extending its use beyond the web-app.
Although these will be the main components of the app, there will be many other accessibility features like a maintainer’s page, a customisable table of all builds, etc.
The Kubernetes dashboard previously supported Third Party Resources (TPR), but these were replaced in Kubernetes by Custom Resource Definitions (CRD). As a result, the original TPR support was removed in Dashboard, but CRD support has not been added yet. This proposal aims at providing a generic support for Custom Resource Definitions to the dashboard, similar to the previous TPR support.
NetBSD currently has good support for the Allwinner ("Sunxi") family of SoCs and some support for SoCs from Amlogic ("meson"), Rockchip, and NVIDIA ("Tegra"). This project is aimed at extending NetBSD support to the Hummingboard Pulse. The Hummingboard Pulse has an i.MX8M Dual/Quad-core ARM Cortex processor and is marketed as a modular fanless mini computer, to be integrated with larger networks or systems, especially in the area of IoT development.
The project aims to grow the TensorFlow.js model garden with five new low-latency, low-power, high-accuracy applications controlled via an interactive dashboard, providing a proof of concept for the mobile-first paradigm of machine learning.
The Gazebo project has a vast set of learning resources in the form of a documentation section, Gazebo tutorials, the QA website, ROS answers and other blogs that developers can refer to for any assistance. All of this information is distributed across the internet with some links joining each other. The aim of this project is to bring all the learning material under one webpage in the form of a documentation index that contains links to the content where the respective information is hosted. Almost all relevant information in such documentation indexes is just a page quick-search away. Such a platform can act as a one-stop place to get all pertinent information about Gazebo. A roadmap has been provided towards the end of the proposal to share information about how the project can be approached and developed in a planned fashion.
CC Vocabulary is a collection of UI components, available both as CSS stylesheets and minified JS, as well as Vue components, that make it easy to develop Creative Commons apps and services while ensuring a cohesive experience and appearance across CC projects.
CC Vocabulary would make it easy for designers to design and prototype mockups, developers to develop evolving standards-compliant code that covers a large number of use-cases out-of-the-box and users to navigate a more consistently familiar CC web presence.
CC Resource Archive is an educational tool by Creative Commons that hosts a lot of informational material pertaining to creative licenses in general and CC licenses in particular. It hosts all types of content, such as infographics, slides, text and audio-video as well.
The revamped archive would provide a more streamlined educational experience, making the site mobile-responsive, modernised and smoother to use. The website would also be the pilot project to use the new CC Vocabulary components, enabling faster development and a more consistent look and feel.
Zulip Terminal is now being actively developed, and its user base has been growing steadily; but most users only treat it as a temporary client. So, this summer my goal is to turn Zulip Terminal from a temporary client into a primary client by implementing the necessary features.
The Internet Archive is a non-profit library committed to Universal Access to Knowledge. In its 23 years of operation, the Internet Archive and its community have archived millions of web pages, books, texts, audio tracks, videos, images, and software. These items are made freely available to the public to consume and repurpose through the Internet Archive’s flagship website, Archive.org.
However, in the midst of the Archive's archiving efforts, it is imperative that users are able to easily navigate through the website's content and find what they are searching for, regardless of the device used or disabilities that they may have to contend with.
Thus, the following project aims at improving Archive.org's navigation by focusing on the search results page and navigation bar.
The steps required to be undertaken are as follows:
To monitor the live execution of VMs, EPT-based approaches have been shown to be very effective. LibVMI currently supports memory access monitoring only on Xen. This project will explore implementing memory access monitoring using Bareflank and its new derivative, Boxy.
The task will require implementing new hypercalls to set EPT memory permissions, as well as developing multiple EPTs and EPT-switching within the hypervisor and then extending LibVMI's Bareflank driver to support these new features.
The idea is to improve the Clang Static Analyzer so that it is useful for developers who work on Clang and LLVM themselves, as well as on other LLVM-based projects, like Swift. LLVM makes use of C++ language features that the Static Analyzer has not yet been taught to understand. The analyzer also has false positives on some C++ idioms commonly used in the LLVM codebase.
Cloud-init configuration through virt-install/virt-manager input arguments at VM initial setup.
OpenCV.js is a JavaScript binding for a selected subset of OpenCV functions for the web platform. It allows emerging web applications with multimedia processing to benefit from the wide variety of vision functions available in OpenCV. OpenCV.js leverages Emscripten to compile OpenCV functions into asm.js or WebAssembly targets, and provides JavaScript APIs for web applications to access them.
However, the performance of OpenCV.js still has a big gap compared to native code, and it cannot support real-time tasks such as face detection and face recognition very well. The biggest reason is that the current version of OpenCV.js runs with a single thread and no SIMD, which greatly wastes the parallel computing power of the CPU.
WebAssembly, however, can reduce the performance gap between web and native. Wasm now supports multi-threading with Web Workers and SharedArrayBuffer, and is gaining new v128 value types used for SIMD, both of which improve the parallel computing capability of the web.
Therefore, the main goal of this project is to speed up OpenCV.js with multi-threading and SIMD.
In statistics, linear regression is typically used for modelling relationships between predictor variables and a response variable. In particular, by determining the strength of the relationship between predictors and response variables, linear regression algorithms can explain variation in the response variable, which can be attributed to variation in the predictors, allowing the identification of variables and/or subsets of data that contain relevant information about the response variable. To date, linear regression functionality in the premier toolbox for analysing neural time series in Python, MNE, is capable of handling designs mostly characterised by the introduction of categorical predictors, to explain variation in brain activity, based on ordinary least squares estimation. The goal of the GSoC project is to extend the functionality and inference options of the linear regression module in MNE-Python by providing a set of functions for specifying and testing more complex variants of the linear regression framework, that are commonly used by the neuroscience community. In addition, the project aims at validating these tools on open data resources.
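At its core, the estimation step being extended is ordinary least squares. A minimal NumPy sketch (not MNE-Python's actual API) of fitting an intercept-plus-slope design:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares: solve argmin_b ||Xb - y||^2."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# toy design matrix: intercept column plus one continuous predictor
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 2.0 + 3.0 * x   # noiseless response with intercept 2 and slope 3
```

In the neuroimaging setting, y would be brain activity (for example, one value per epoch at a given sensor and time point) and X would encode the experimental predictors.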
Integrate WireGuard tunneling into the BMX7 routing protocol. The purpose of this integration is to provide a more secure cryptographic way of communication than plain IPIP. Initially, binary calls to WireGuard itself will be used and the functionality will be supported through a new plugin. Subsequent work will try to combine the current tunneling plugin with WireGuard tunneling and apply post-quantum crypto to the public keys.
This project proposes a simple yet effective open web app (OWA) to the pre-existing Patient Flags Module using ReactJS. The project focuses on usability and robustness while maintaining the standards of the Reference Application.
OpenMRS has come a long way in improving healthcare facilities worldwide through open-source software. Earlier, such software was only web based, but with the advent of mobile technologies our phones are becoming capable of doing any task that can be done on the desktop, with more flexibility, mobility, and just a few touches. The Nigeria Telemedicine App will thus stand out as another example that helps overcome the day-to-day struggle of manually documenting everything or operating large desktop systems. Patients can avail themselves of fast medical services by registering through this mobile app, providing minimal details about themselves and their ailment. The app then informs the patient about when to expect a call from a doctor. The app also provides an emergency feature that helps the user call an ambulance and provides information about performing CPR, choking-prevention maneuvers, stopping excessive bleeding, delivering a baby, etc.
MBDyn is a multibody dynamics solver which comes without any default graphical user interface for pre- and post-processing. This project is aimed at kick-starting the development of an add-on to a popular CAD package which can be used as a pre-processor for MBDyn simulations. Users can model the problem using the CAD software's Python API, and the model will then be transformed into a standard MBDyn input format file. The project starts from scratch; it is proposed to generate MBDyn input code for at least a very basic CAD model, along with a code structure and documentation to help further developers contribute to and complete the pre-processor.
● Implement OpenGL(ES) back-end for RetroPlayer’s shaders
● Possibly provide a way to allow users to use any preset they want
This project is to rework the Qt interface of VLC on Windows and Linux to make it both more beautiful and more useful. The redesign would likely move the UX towards that used by the UWP version of VLC to make it simpler and more user friendly. This project broadly consists of two parts: designing the new UI/UX, and building it using Qt. The redesigned interface would be built in Qt 5 with Qt Quick and QML. Most interface requirements should be covered by Qt Quick Controls, which provides us with a range of supported, good-looking controls to build with. This also allows us to use the Universal Style to easily make VLC fit better with modern Windows, and/or use other styles on Linux. Additionally, using Qt Creator and Qt Quick Designer should allow us to create the new interface with ease.
The aim of this proposal is to extend the functionalities of the new license requests submittal by adding checks for duplicate license requests and checks for near matches of licenses which would greatly improve the efficiency of the approval process of the license. This project will take some workload off the SPDX legal team as they won’t have to take care of duplicate license requests. Thus, the project will make the license submittal feature more robust.
This project aims to make OODT deployment and configuration management simple through implementation of a docker-based deployment tool that integrates with the existing distributed configuration management tool in OODT. The target is to first containerize the major components in OODT (file manager, resource manager and workflow manager) using Docker. Each component will then have its own Dockerfile and Maven build execution using the dockerfile-maven plugin. Next, the initial OODT deployment tool will be created using Kubernetes. Once this is done, the existing OODT Distributed Configuration Management feature will be integrated with this tool to handle dynamic configuration management. As a later step, the deployment could be upgraded to work with Helm or KNative.
Through this project, OODT will gain the ability to seamlessly manage OODT components regardless of the deployment environment. Technologies including Docker, Kubernetes and Maven will be required to carry out this project.
The Direct Rendering Manager subsystem has a pretty elaborate ioctl interface, and it might be useful to be able to support its decoding.
This project aims to build an efficient BAM-to-ADAM format converter utilizing the Sambamba library base.
DFFML provides APIs for dataset generation and storage, and for model definition using any machine learning framework; everything from high-level down to low-level use is supported. The goal of DFFML is to build a community-driven library of plugins for dataset generation and model definition, so that developers and researchers can easily plug and play various pieces of data with various model implementations, or generate datasets using the implemented features to increase the accuracy of output. For this, DFFML needs to implement a large number of machine learning models as well as various features. I have planned to add the models/algorithms listed below to DFFML.
One of the most defining features of Jenkins is its extensibility via plugins. Although plugins can be managed and installed from within a live master instance, many administrators would like to be able to control which plugins, and which particular versions, are installed before the Jenkins master starts. There are currently many reincarnations of plugin management across Jenkins; the goal of this project is to create a new Plugin Manager CLI tool and a library which would unify plugin management across the different implementations.
Turing.jl is a probabilistic modelling language in Julia. Currently, only exact methods based on MCMC and sequential MC are supported in Turing.jl. Unfortunately, exact inference is not always feasible, and thus it would be nice to also have approximate inference methods available. This project will focus on implementing a specific class of approximate methods known as variational inference (VI).
The outcome will be:
This will be a step towards bringing Turing.jl on feature-parity with other well-known Bayesian inference packages in other languages, e.g. pymc3 and stan.
Moreover, if time permits, we will explore relaxations of the mean-field approximations used in standard VI by introducing mixture-components and low-rank structure to the covariance matrices.
Intermine is a powerful data warehousing, integration and analysis tool used to store and share genomics data. However, setting up an instance of Intermine is a time-consuming and error-prone process. It also requires technical knowledge and some familiarity with Java, Postgres and shell scripts. These issues create a barrier to entry and friction in the adoption of Intermine by the bioinformatics community. To solve these problems, the Intermine team is planning to create a cloud platform that offers managed Intermine instances. This will remove the technical burden from the user and greatly simplify the creation of Intermine instances. The work done on Intermine Cloud can also easily be translated to simplify the creation of Intermine instances locally. This project forms a part of the Intermine Cloud project. It includes packaging of Intermine and its components in Docker containers, cloud infrastructure setup, orchestration of containers, and authentication and authorization on the cloud.
The project involves the redesign of Firefox Reader as well as work on the functionality-related issues the current reader is facing. The redesign of the Reader has to be done so that it complies with the Photon Design System and also offers a better user experience to its users. Thus, using knowledge of JavaScript, CSS, and web experience, the project aims at revamping Reader View across all three reader modes, i.e. Light Mode, Dark Mode and Sepia Mode, to enhance the reader's reading experience.
The spatial extension for SBML provides support for describing processes that involve a spatial component. This project seeks to implement validation functions for the spatial modelling package for SBML, thereby updating the extension to the latest specification. These functions will be used by the SBML offline validator to validate any models that contain a spatial component.
Sustainable Computing Research Lab has done research and projects to identify elephants in images and tag them. LabelLab is a generalized extension of this. LabelLab focuses on creating a mobile and web application to classify and label all kinds of animals, without being specific to elephants, in order to satisfy more use cases. The LabelLab mobile app's motivation is to allow users to use the classification model in the field on their handheld devices. The mobile app will have features to take an image, classify it using the classification model, and show the necessary information.
Currently, the classification model is already built, but both the mobile client and the backend to integrate the mobile client with the classification model have to be built from scratch. As this is a new project, its requirements should be identified precisely and planned for future scalability.
I wish to implement the mobile client in Flutter for both Android and iOS and the backend in Node.js.
The current Intermine viewer interface works on JavaServer Pages technology, which is planned to be discontinued in a few years, and a new interface, namely BlueGenes, has already been developed. To allow developers to easily develop tools for BlueGenes (any type of tool: visualisation, tables, or anything that can be integrated on the gene/protein report page), Intermine released the BlueGenes Tool API, which defines specifications on how to build a BlueGenes-compatible tool in JavaScript. Currently there exist two such tools: protovista and cytoscape-interaction-network-viewer. The BlueGenes ClojureScript application queries for all available tools before sending the report page to the frontend and integrates the ones found. These tools are JavaScript applications following the specifications provided by the BlueGenes Tool API, and they receive some initialisation input from BlueGenes, such as the gene id in the case of a gene report page. The purpose of this project is to develop BlueGenes-compatible visualisation tools which can help biologists and explorers gain better insights from the report page of a gene or protein.
Carbonfootprint is available on the Android Play Store, and this project tries to make the app more accessible by implementing its iOS counterpart and completely redoing the UI to make both apps feel more native.
This document describes the proposal to "add traditional Machine Learning algorithms to the Swift TensorFlow library" for the "TensorFlow" organization. This proposal aims to add traditional machine learning algorithms to the Swift for TensorFlow library, so that it will not only focus on deep learning but support machine learning overall. This will help users of Swift for TensorFlow focus on application development rather than implementing state-of-the-art machine learning algorithms.
Detector simulation consumes most of the High-Energy Physics computing cycles, and even so, experiments have to take hard decisions on what to simulate, as their needs surpass the availability of computing resources. It is therefore necessary to explore innovative ways of speeding up simulation in High-Energy Physics. Thus, the GeantV project aims to develop a high performance detector simulation system integrating fast and full simulation, but also that can be ported on different computing architectures, including accelerators.
This project aims to develop an image finder for Debian Cloud Team.
As part of the project, I intend to build simulations and demos along with introductory animations about exoplanets, their descriptions and the methods used to detect them while highlighting the Transit Light Curve method. Proposed simulators and demos other than introductory animations:
TensorBoard is a TensorFlow tool for model visualization. It can generate histograms and graphs, and help debug and improve neural networks. To help the community, this proposal aims to produce code examples, guides, and tutorials that expose the advantages of TensorBoard and show how to extract relevant information about neural networks with the tool.
The main purpose of this project is to make the coala text-editor plugins more robust and active. This will be done by ensuring that the plugins are up to date, with proper tests and continuous integration, to simplify future maintenance for contributors. This project will be completed through the use of several open-source tools. Further, its completion will give other developers the freedom to use coala in their favourite text editors, and the ease of usage across different text editors may also help popularize coala.
Building an interactive robotic simulator in order to simulate the complex real world. Since real-world environments are dynamic in nature, we need a simulator that can provide this type of environment to the robot.
Implement probabilistic feature extraction methods and model-specific feature extractors, such as deep learning, in TMVA.
The goal is to have users send real-time reports via USSD (Unstructured Supplementary Services Data); this will be integrated with the Ushahidi Platform v3. Reports sent can be seen on the platform for further analysis.
This project aims to improve backend test coverage to 100%, and then migrate the backend codebase to be simultaneously compatible with both Python 2 and Python 3, while putting measures in place (like lint checks) to ensure that the backend code always remains compatible with both Python 2 and Python 3, regardless of subsequent developer changes. The reason these two projects are linked is that one prerequisite for a safe migration is full test coverage, so it's important to make sure that backend coverage is 100% before migrating. The project would then make sure that all libraries Oppia uses are compatible with Python 3. This project would also standardize all scripts in the codebase to be written in Python (currently, there is a mixture of bash and Python in use). The project would end with creating a small list of remaining steps needed for a final migration to Python 3 (once a solution is found for the GAE dependency issues).
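One such lint check could be sketched as follows. This is a hypothetical illustration, not Oppia's actual tooling: it uses Python's `ast` module to flag files that are missing the `__future__` imports that keep behaviour consistent across Python 2 and 3 (the exact set of required imports is an assumption here).

```python
import ast

# Hypothetical lint check: report which __future__ imports a source file
# is missing. The required set below is an assumption for illustration.
REQUIRED_FUTURES = {
    "absolute_import", "division", "print_function", "unicode_literals",
}

def missing_future_imports(source):
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and node.module == "__future__":
            found.update(alias.name for alias in node.names)
    return REQUIRED_FUTURES - found

good = ("from __future__ import absolute_import, division, "
        "print_function, unicode_literals\n")
bad = "print('hi')\n"
```

A real check would run over every backend file and fail CI when the returned set is non-empty.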
Adding
The test suite would ensure that the HTTP/2 support for Wget2 is flawless.
The susi.ai web app repositories have a very mature codebase as of now. I have invested most of my time fixing old bugs, refactoring the codebase and optimizing UI components. But a few of the major features are still not functioning properly. We are also falling behind as far as library versions are concerned, and we need to implement new features in order to compete with the proprietary competitors. I propose to fix the preview system and the bot framework. I also intend to complete the conversation view in the CMS app, and will work on putting up an API rate limit and access control. A shift from the old theming system to the MUI-based theming system is also needed. I will finish up the issues I have raised in the SUSI account repository as well, regarding the use of styled-components and the redux library for state management. While working on it, I will also upgrade the ReactJS and MUI versions as mentioned below. Finally, I will improve the documentation and do thorough testing of the newly implemented and fixed components so as to ensure product stability.
LLVM functions, as well as arguments and other entities, can be tagged with several attributes, such as that a function only reads memory or that it cannot throw exceptions. These attributes are used by many optimizations when deciding whether a particular transformation is valid.
Being an open-source operating system, Android is more vulnerable to attacks. In this proposal I present ideas for improving the Cuckoo sandbox for Android malware analysis and for supporting recent Android versions.
In addition to the already implemented file transfer via HTTP upload, add support for peer-to-peer file transfers via Jingle.
DefectDojo is a security tool that automates application security vulnerability management. DefectDojo streamlines the application security testing process by offering features such as importing third-party security findings, merging and deduping, integration with Jira, templating, report generation and security metrics.
This project targets implementing Scan 2.0 for DefectDojo. Scan 2.0 consists of automating scanning orchestration within DefectDojo: with it, we can launch scans for tools like Nmap, ZAP and Nikto from within DefectDojo. The project also includes writing unit tests for the tools to ensure that they work correctly.
In the previous documentation of the Fineract API, the source was simply an HTML file maintained manually in parallel to the source code that defines the REST API, and it was therefore often out of sync. To overcome this problem, Swagger documentation, an automated tool for documenting REST APIs, was introduced during GSoC. The majority of the API doc conversion was completed and finalized during GCI 2017. However, new APIs have been added since 2017. This project aims at adding Swagger documentation for the new APIs, updating the Swagger documentation for the current APIs, improving the Swagger UI, and automating the Swagger documentation.
Feature completion for CS API, fixes and improvements to Dendrite and its related projects, and (optional) more unit tests for the project.
The Kalman Filter is a method of iteratively predicting the future state of a system based on previous information. Not only is a Kalman Filter more reliable at predicting future state than traditional extrapolation techniques, it also provides a confidence for the estimate. A Kalman Filter is used both to reduce the impact of sensor noise on estimations, and to determine which sensors can be “trusted” more than others. Whereas more primitive methods for estimation and extrapolation rely on some form of averaging, a Kalman Filter forecasts by developing a weighted covariance for each sensor input.
The aim of this project is to implement a Kalman Filter in Rust. Rust has gained popularity for providing more compile-time checks than other systems-level languages, namely C and C++. Rust’s memory model ensures that there is little to no room for many of the memory pitfalls common in other low-level languages, such as double-freeing memory, dangling pointers, and use-after-free errors. This, in conjunction with high runtime performance, makes writing components of a codebase in Rust favorable for both speed and stability.
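The predict/update cycle described above can be sketched in a few lines. The project targets Rust; this is only an illustrative Python sketch of the simplest (one-dimensional, constant-state) case, showing how the filter maintains both an estimate and a variance (its confidence), with all parameter values chosen arbitrarily:

```python
# Minimal 1D Kalman filter: fuse noisy measurements of a constant value.
class Kalman1D:
    def __init__(self, x0, p0, q, r):
        self.x = x0   # state estimate
        self.p = p0   # estimate variance (the filter's confidence)
        self.q = q    # process noise variance
        self.r = r    # measurement noise variance

    def update(self, z):
        # Predict: the state is modelled as constant, so only variance grows.
        self.p += self.q
        # Update: the Kalman gain weights the measurement by relative confidence.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = Kalman1D(x0=0.0, p0=1.0, q=1e-5, r=0.1)
for z in [0.9, 1.1, 1.05, 0.95, 1.0]:
    estimate = kf.update(z)
```

Note how the gain `k` shrinks as the variance `p` falls: once the filter is confident, new measurements move the estimate less, which is exactly the "weighted" behaviour the abstract contrasts with plain averaging.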
Livechat is a package that adds the ability to embed a pop-up support chat in your website, for example when you want to interact with users visiting your website and help them with their problems. All you need to do is embed a script in your static website. Live chat agents who wield this tool use it to make customers happy and satisfied.
This project adds real-time monitoring to Livechat, an important feature that will help livechat agents in the following ways:
LANPR is a fast and accurate 3D NPR feature line rendering engine developed for the Blender 2.8 NPR branch. This project aims to complete the rendering workflow and make it more production-ready. If time permits, more functionality might also be implemented, such as compatibility between LANPR and the existing Freestyle line modifiers.
ChainerX is the next generation of the Chainer framework, based entirely in C++. It must therefore support all Chainer routines as well as other important routines that have, for instance, been defined in NumPy and SciPy. The Expand ChainerX Ops project aims to bridge this gap in routine definitions by implementing these operations in ChainerX.
Packaging the scancode-toolkit for Debian and RPM, and creating a Docker image, along with the appropriate submissions to the respective repositories.
An automated layout web service with an integrated converter, to change the format of the data, would be of great use to people who want to quickly and easily lay out their data with some of the well-known cytoscape.js layout extensions.
The purpose of this project is to verify the convergence of the training algorithms provided in 69 Neural Network R packages available on CRAN to date. Neural networks often must be trained with second order algorithms and not with the first order algorithms as many packages seem to use instead.
Due to the large number of packages to validate, the work has been split among two students. Being Student 2, I will validate 34 packages and prepare one article to be published in the R-Journal. At the end of the program, a package will be made available to Neural Network package authors and maintainers to verify and test new algorithms by themselves.
The results of this project could be used to make better neural network packages in the future, improve the ones currently being used, or simply know how the neural network packages actually perform.
UCSC Xena is a functional genomics visualization and analysis platform. For this purpose, many datasets are provided by data hubs available within the Xena Browser itself, which users can draw on. The data hubs are: UCSC Public Hub, TCGA Hub, Pan-Cancer Atlas Hub, ICGC Hub, UCSC Toil RNAseq Recompute, Treehouse Hub and GDC Hub. Our discussion of interest lies in the GDC Hub.
GDC (Genomic Data Commons) Hub fetches data from the GDC repository (https://portal.gdc.cancer.gov/repository) via the GDC API. However, the data on Xena is outdated: current Xena data was last updated at release 10, roughly 1.5 years ago. This project mainly revolves around updating the data on Xena to the current release (release 15). It will also simplify the process of adding data to Xena by removing the XML files as a source.
Real serves to tackle the pervasive problem of inaccuracy that arises from floating point arithmetic. However, there is still much to be done to get to a peer-review-ready state. I propose to optimize the memory usage of Real. By changing the way real_explicit stores numbers, we can reduce the memory usage of real_explicit by an order of magnitude. Utilizing std::variant, we can reduce the memory used by a real number to that of either a real_operation, real_explicit, or real_algorithm. I also propose to improve the way Real deals with operations, in particular by simplifying sums of multiples of the same number. Further, I propose to implement benchmarking using Google Benchmark, to allow direct comparison between Real and other representations. After this, Boost.Real may be ready for peer review.
The main objective of this project is to create interfaces (by means of various visualizations of the library) that make it easier for users to:
To achieve this, I propose to develop several adequate visualizations with interactivity and UI controls, so that they serve as analytical web interfaces through which users can access what they need in a couple of clicks. These interfaces will be made accessible to users by integrating them within the living Sphinx docs, so that they are auto-generated when building the docs.
Besides, I also aim to document the wsynphot package well after integrating the developed interfaces into it, ultimately reshaping the entire package. Users can then auto-generate both the filter curves and photometry directly from the docs, as per their requirements, using these responsive interfaces.
Implementing multiplayer support for the Moonbase Commander game.
This project aims at making the SUSI_LINUX project as seamless as possible.
The Question Tool client-side app is currently in a functional state, but there are both a large number of remaining issues and an equally large number of required features to be implemented. A user can post an instance, and others can only interact by reacting to the instance (liking it) or posting an answer to it. However, Question Tool currently sees very little user interaction because there are no notifications or emails about what is happening with an instance or with users' replies. The tool's user experience is also not up to the mark at present, which may be another reason for low user collaboration. The project also lacks some features that would be very helpful to users.
The Creative Commons plugin for WordPress has been due for an update for over two years. I plan to rebuild the plugin from scratch with the WordPress Coding Standards in mind. This optimization will come in the form of both standards-compliant code and documentation.
I am proposing the following features which will make the plugin more usable, practical and up-to-date. You'll find the detailed implementation of each proposed feature below.
With the above-mentioned features, a user installing CC WordPress plugin will be able to add the Creative Commons Licenses to images, content, or even generate License pages that can help explain licensing of the entire site's content. Moreover, with a new documentation site, we'll be able to bring in more community contributors to the project hosted on GitHub.
Over the course of Run 2, from 2016 to 2018, the CMS detector produced an unparalleled amount of data, resulting in an intricate optimization problem in data access and storage infrastructure as well as distributed computing that is one of the fundamental challenges of running an experiment like the LHC. Dedicated physicists and engineers have constructed a system that has served the collaboration well, but the approaching HL-LHC upgrade, which will produce about an exabyte of data per year, demands a more economical solution. Fortunately, the HEP Software Foundation (HSF) has been collecting data describing global and local access patterns that can be used to model the response of alternative, novel infrastructures that may better serve High Energy Physics for decades. Furthermore, with the advent and ever-growing popularity of "Big Data" in industry, the optimization philosophy of the CMS data infrastructure, as well as the predictive power of the project itself, will have relevance far beyond experimental physics.
Most containers currently have a hard-coded default seccomp profile, that is pretty loose and meant to support a wide range of use-cases. The idea of this project is to build a tool that would watch all of the syscalls made within a container, and generate a seccomp profile for this specific container to further harden security. We would want to add a command to the Pod Manager (Podman) tool to basically launch the container and then collect a set of syscalls either through strace, or auditing, or similar tracing technologies.
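The collection step described above could be sketched roughly as follows. This is a hypothetical illustration, not Podman's implementation: it parses strace-style output lines to collect the set of observed syscall names, then emits a default-deny profile in the shape of the OCI seccomp profile format that allows only those syscalls.

```python
import json
import re

# Hypothetical sketch: derive a default-deny seccomp profile from
# strace-style trace lines such as:
#   openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 3
def syscalls_from_trace(lines):
    names = set()
    for line in lines:
        m = re.match(r"\s*(\w+)\(", line)
        if m:
            names.add(m.group(1))
    return sorted(names)

def build_profile(syscall_names):
    # Shape follows the OCI seccomp profile format: deny everything by
    # default, allow only the syscalls actually observed in the container.
    return {
        "defaultAction": "SCMP_ACT_ERRNO",
        "syscalls": [{"names": syscall_names, "action": "SCMP_ACT_ALLOW"}],
    }

trace = [
    'openat(AT_FDCWD, "/etc/passwd", O_RDONLY) = 3',
    "read(3, ..., 4096) = 512",
    "close(3) = 0",
]
profile = build_profile(syscalls_from_trace(trace))
print(json.dumps(profile, indent=2))
```

A real tool would of course need to run the container's full test workload before freezing the allow-list, since any syscall not observed during tracing would be denied at runtime.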
This proposal aims to implement various missing features in Accounts.susi.ai, SUSI Skills and Chatbots, and to take a step towards integrating SUSI skills with the SUSI.AI smart speaker. The features added in this proposal are:
In the wake of the research I have undertaken for my Master's thesis on automatic detection and assessment of right-wing extremists' online speech, I intend to leverage the insights gained thereby to help CLiPS refine and expand their resources in the field of hate speech. The work will include, among other possible sub-tasks: the automatic collection of hate speech textual data; the definition of formal annotation guidelines for such data, building on existing literature such as "Hate Speech Dataset from a White Supremacy Forum" (De Gibert et al., 2018); the fine-grained annotation of the data for ulterior automatic processing; the investigation of the best automatic assessment methods for the data (both in terms of performance and explainability); a study of the opportunities and limitations (from a technical, ethical and practical perspective) of said automatic processing; and the transposition of the results to develop hate speech assessment tools for other languages such as French or German.
This will be a game engine for the Pocket PC game Hyperspace Delivery Boy. It consists of a set of Lua extensions, which will be based on the Lua code that was introduced in the SWORD25 engine.
I think the most important concern in creating the snippets is controlling the required syntax. For this reason, we need to specify the required options and spelling in the syslog configuration. In particular, standard log files and the filter stage must be included.
For syntax highlighting, keyword control will facilitate writing the configuration (options, source, destination, filter, parser, rewrite, template, template-function, log, junction, channel, block).
In the syslog configuration, file paths are defined like this: { xxx ("…"); }. We should therefore offer automatic completion and minimize errors.
We also have to check that the statements appear in the correct order, and give a warning when the configuration is faulty.
For example, if I need to create a snippet for syslog-ng:
{
    "Create-Log-Block": {
        "prefix": "log",
        "body": [
            "log {",
            "\tsource ();",
            "\tparser ();",
            "\tfilter ();",
            "\tdestination ();",
            "};"
        ],
        "description": "Log Block Example"
    }
}
Liquid Galaxy is an interactive environment with many processing nodes and screens. This proposal aims to create a new JavaScript library to acquire data from sensors to be displayed in Liquid Galaxy, standardizing the connection between the installation and the said sensors. The library aims to facilitate the development and deployment of new ways of displaying information in Liquid Galaxy. It will also support the creation of mock data for installations where the sensors are not yet available.
Over the last two years, I have used TensorFlow in every project related to deep reinforcement learning. But when I decided to work on data-efficient reinforcement learning, I found it necessary to handle model uncertainty by using probabilistic models like Gaussian processes (GP) and Bayesian neural networks (BNN). While TensorFlow Probability provides support for Bayesian neural networks, I found the Gaussian process model in the distribution API somewhat poorly introduced in comparison with other Gaussian process frameworks. As a junior researcher working on robot learning, I believe developing such models will help us reduce the training time for our agents, and in that way we are making a step forward towards Artificial General Intelligence (AGI), which can learn and adapt as fast as possible. From another side, as a TensorFlow user, it will be convenient if I can develop algorithms that mix ordinary deep neural networks and deep Gaussian processes all in one place as parts of TensorFlow. And it would be great if I got a chance to develop this with my own hands, with help from TensorFlow itself, over the next summer.
The project requires implementing two new objective loss functions: one for survival loss and another for binomial loss. Survival loss includes the accelerated failure time model with left-, right- and interval-censored outputs, with time-to-event following Gaussian and logistic distributions. The binomial loss includes constraints over the number of trials. This requires calculating the gradient, Hessian and loss metric for each loss function, and making changes to the data structures to implement it correctly.
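The gradient/Hessian requirement can be illustrated with the simplest binomial case. This is a hedged sketch, not the library's API (the function names here are made up): for logistic (binomial) loss on a raw score s with label y in {0, 1}, writing p = sigmoid(s), the per-sample gradient of the loss with respect to s is p - y and the Hessian is p(1 - p).

```python
import math

# Illustrative per-sample gradient/Hessian for binomial (logistic) loss,
# the pair a gradient-boosting objective must supply:
#   loss = -[y*log(p) + (1-y)*log(1-p)],  p = sigmoid(s)
#   d(loss)/ds   = p - y
#   d2(loss)/ds2 = p * (1 - p)
def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def binomial_grad_hess(scores, labels):
    grads, hess = [], []
    for s, y in zip(scores, labels):
        p = sigmoid(s)
        grads.append(p - y)
        hess.append(p * (1.0 - p))
    return grads, hess

g, h = binomial_grad_hess([0.0, 2.0, -1.0], [1, 1, 0])
```

The Hessian is always positive here, which keeps the Newton-style leaf-weight updates in boosting well defined; the survival losses would need the analogous derivatives per censoring type.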
‘ftinspect’ is an essential part of FreeType that shows how a font gets rendered by FreeType, allowing control over virtually all rendering parameters. The idea is to integrate each of the demo tools into ‘ftinspect’, based on the Qt GUI toolkit. Currently, it provides only limited ‘ftgrid’ functionality. The work thus includes finishing ‘ftinspect’ so that it handles all aspects of ‘ftgrid’ and the other demo tools.
Adding end-to-end encryption to libqmatrixclient for future support in Qt/libqmatrixclient-based clients like Quaternion.
TensorFlow is one of the most popular machine learning frameworks and is widely used in fields beyond machine learning and data science. The architecture of TensorFlow has been elegantly designed such that it can be extended into big data, medical imaging, and the physical sciences. Supporting different data formats is a necessary step for communities beyond machine learning to adopt TensorFlow, as data is always the entry point or edge node of TensorFlow’s graph. Importing data of different formats natively in TensorFlow allows users to build their systems or applications without needing additional conversion infrastructure.
TensorFlow I/O focuses on providing various data format supports for TensorFlow, and many data formats are already supported, like Apache Kafka stream-processing, Amazon Kinesis data streams and also LMDB format and MNIST format, etc. However, the generic JSON format hasn’t been supported yet. It is quite necessary to support JSON format since JSON files are widely used in machine learning and data science.
I will be working on providing JSON support in TensorFlow I/O so that it will be possible to read JSON files into TensorFlow.
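The core of the task can be pictured as turning a file of JSON records into per-field columns that a dataset pipeline can hand to TensorFlow as tensors. TensorFlow I/O implements such readers as C++ kernels; the following is only a stdlib sketch of the shape transformation involved, not the project's code.

```python
import json

# Illustrative only: flatten a string of JSON records into per-field
# columns, the layout a dataset op would expose as tensors.
records = '[{"x": 1.0, "y": 0}, {"x": 2.5, "y": 1}, {"x": 3.0, "y": 1}]'

def columns_from_json(text):
    rows = json.loads(text)
    keys = sorted(rows[0])          # assume a uniform schema across records
    return {k: [row[k] for row in rows] for k in keys}

cols = columns_from_json(records)
```

A real implementation must additionally handle schema inference, nested fields, dtype mapping, and streaming records that do not fit in memory.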
Currently, there is no tool available to integrate into a website that has the capacity to query through multiple layers of linguistic annotation, such as morphology, syntax and semantics annotations. Also, the data has to be queried from RDF format, which is flexible and will help integration into a linked open data toolbox for computation. A tool named ANNIS already exists to query through multiple layers of annotation, but it has some limitations that have to be overcome. The tool must have a query language that is easy to use for non-professionals. It must also highlight, in the retrieved document, the results that led to its retrieval, and it must return data in a format that allows easy integration into any website.
Enable full application of Time-Frequency Analysis tools on Source Estimate M/EEG neurophysiological data by integrating mne.SourceEstimate objects with mne.time_frequency.tfr functions.
Automated C/C++ header generation from D files
Elichika and ch2o are Python-to-ONNX experimental compilers for the Chainer ML framework (Elichika will eventually replace ch2o). Given a Chainer model, they parse the Python source to obtain an Abstract Syntax Tree and use it to generate ML-framework-independent ONNX graphs. Currently, neither compiler supports parsing Python jump statements. Moreover, several Chainer functions supported by ch2o are currently unavailable in Elichika. The objective of this project is to extend support for parsing break, continue and pass statements in both Elichika and ch2o. Additionally, it also involves adding Elichika support for Chainer functions and links existing in ch2o but not in Elichika, such as sigmoid, max_pooling_2d, BatchNormalization, etc.
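Since both compilers work from Python's Abstract Syntax Tree, the first step of the project amounts to recognizing the jump-statement nodes in that tree. A minimal sketch with the standard `ast` module (illustrative only, not the compilers' code):

```python
import ast

# Spot the jump statements (break/continue/pass) in a piece of source,
# the nodes the compilers must learn to translate into ONNX control flow.
source = """
for i in range(10):
    if i == 3:
        continue
    if i == 7:
        break
else:
    pass
"""

def count_jump_statements(code):
    counts = {"break": 0, "continue": 0, "pass": 0}
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Break):
            counts["break"] += 1
        elif isinstance(node, ast.Continue):
            counts["continue"] += 1
        elif isinstance(node, ast.Pass):
            counts["pass"] += 1
    return counts

counts = count_jump_statements(source)
```

The hard part, of course, is not detecting these nodes but lowering them: ONNX has no direct break/continue, so the compilers must restructure the loop body (e.g. via condition flags on a Loop op) to express the same control flow.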
Support of all Bricks for Catrobat Language version 0.992 (except physics bricks and extensions like Lego)
This project proposes to add support for APE 14 in NDCube, a SunPy-affiliated project. In order to support both FITS-WCS and gWCS, and hence more WCS libraries in the future, this project aims to convert the ndcube package to use a common WCS API. The new API has already been outlined in astropy’s APE 14. Implementing support for APE 14 will enable ndcube to use FITS-WCS and gWCS independently and hence increase the power and scope of the ndcube package. With this new feature, NDCube will be better placed to serve a wider array of n-dimensional data analysis needs from multiple astronomical communities.
Improve the infrastructure underpinning Firefox's in-tree documentation.
Currently the Public Transport Assistant has achieved a lot of success in validating route data in OpenStreetMap, but there are still many features that mappers would like the plugin to have so that it can act as a true assistant. One example is the BicycleRoutingHelper: right now PT-Assistant is only applicable to public transport, but extending it to other modes of transportation would be a really good addition. Different areas of routes and their presentation in the editor will also be enhanced during the summer.
One of the main ingredients in DOLFIN’s native support for parallel computations is the mesh partitioner. The mesh partitioner seeks to ensure a load balance among processing elements and to reduce the number of shared elements on partition boundaries, in order to minimize communication overhead. Currently, users can pick from the SCOTCH or ParMETIS partitioners, neither of which is actively maintained. The KaHIP partitioner is an actively maintained alternative that promises more scalable and higher-quality results than state-of-the-art partitioners such as ParMETIS or PT-Scotch. The purpose of this project is to add the KaHIP partitioner to DOLFIN's graph wrappers and mesh partitioning, and to investigate whether the promised improvements are reflected in DOLFIN's parallel toolchain.
Refining four Turkic MTs: uig-tur, kyr-tur, uzb-tur and tat-tur
The aim of this project is to implement an architecture that enables support for nested languages by default. Users of coala would not have to concentrate on writing new bears/analysis routines. This project will work perfectly with the existing bears.
On the successful completion of project, coala would have the support for the following programming languages:
It is important to note that the aim of the project is not to provide full-fledged support for all nested languages, but to lay the architectural foundation. Full-fledged language support will not be possible until coala gains a new feature whereby bears can accept ASTs and lint through them, instead of taking files as input.
DroidBot is a lightweight automated testing tool for Android apps. It can send random or scripted input events to an Android app, use a Breadth-First Search or Depth-First Search strategy to iterate through the app’s activities, and generate a UI transition graph (UTG) after testing. It is compatible with most Android apps and able to run on almost all Android-based systems.
However, it doesn’t support OpenGL-based games and some hybrid apps whose GUIs are not built entirely from standard Android UI components. In those apps, interactable components are designed in a way that can be easily recognized by a human, and the UI design of a great number of interactable components is based on universally accepted rules. For these reasons, we believe we can use Computer Vision and Machine Learning techniques to detect the position and type of UI components in given app screenshots. Once we extract the UI structure of a given screenshot, we can feed it to the other modules of DroidBot to finish the whole testing process.
In summary, the goal of this project is to extend the application range of DroidBot to game testing with Computer Vision and Machine Learning techniques.
The Julia language offers support for differentiable programming with the help of its AD. This project leverages that to test a differentiable programming (DP) approach for autonomous vehicle research. The goal of the project is to create a Duckietown environment in Julia and train it with DP algorithms on various maps.
The Starlark language was designed for Bazel. The language now has a specification, three implementations (in Java, Go and Rust), and is used outside Bazel.
The goal of this project is to ensure the different implementations are in sync. This includes creating a common test suite, identifying and resolving corner-cases, suggesting changes to the language specification. It would be useful to compare the performance of the implementations (to find performance issues) and improve the interpreter in Java.
Rekognition is an Amazon service capable of identifying objects, text and activities, performing facial analysis and recognition, detecting the frequency of objects or an inappropriate scene, and much more using deep learning. This GSoC proposal aims to build a self-contained module to start off Poor Man’s Rekognition — an open-source version of the commercial service. This initial module, which will be scalable and robust, will have this as the main goal: “Given a set of properly tagged people (suppose for example, a number of celebrities), create an API that can be used to identify such people in other images.” The aim by the end of GSoC is to complete a pipeline that outputs a timeline of scenes in a video, and the different actors in each scene.
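The "identify tagged people in other images" goal above typically reduces to nearest-neighbor search over face embeddings. As a hedged sketch of that idea only (the gallery, threshold value, and function names are all illustrative, not the project's API): compute the cosine similarity between a query embedding and each tagged reference embedding, and accept the best match only above a threshold.

```python
import math

# Illustrative identification by cosine similarity over face embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(embedding, gallery, threshold=0.8):
    # gallery maps each tagged person's name to a reference embedding.
    name, score = max(
        ((n, cosine(embedding, e)) for n, e in gallery.items()),
        key=lambda t: t[1],
    )
    return name if score >= threshold else "unknown"

# Toy 3-dimensional embeddings; real face embeddings are far higher-dimensional.
gallery = {"alice": [1.0, 0.0, 0.2], "bob": [0.0, 1.0, 0.1]}
who = identify([0.9, 0.1, 0.25], gallery)
```

The threshold is what turns closed-set matching into open-set recognition: without it, every face in a video scene would be forced onto the closest tagged celebrity.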
Optimization of complex systems is one of the biggest challenges in engineering. A system can be considered complex when it consists of multiple sub-systems and when nonlinear interactions between its various design parameters and sub-systems are present. Optimizing such systems is a challenge for human designers, who can neither directly identify optimal design points nor manually investigate a large design space to identify optimal solutions. To tackle this problem, algorithmic optimization approaches are required. Genetic algorithms can successfully navigate a large and nonlinear design space, but their performance is highly dependent on the selection of various hyper-parameters. In its current form, the optimization problem is concerned with the design of an electric propulsion unit for small satellites. This project aims at the development of visualization tools that will aid in better understanding such complex optimization processes. By developing visualization tools for the optimization process and the sub-systems' characteristics, the algorithm's performance can be further enhanced and new insights that can prove valuable for human designers can be identified.
Amaya is a princess whose kingdom has been invaded; she now needs to defeat The Tyrant to free her people. The idea is inspired by the game Hollow Knight and consists of running, jumping and using magic skills to defeat the enemies and the final boss.
This idea can be used for the "Coding Tutorial Game" too.
In this project we intend to integrate publicly available -omic and clinical datasets using natural language processing techniques. Combining genomics data with physiologic read-outs may be effective in creating a robust machine learning and data analysis pipeline. Example microarray gene expression data can be downloaded from GEO (https://www.synapse.org/#!Synapse:syn5612563) and physiologic data from eICU (https://eicu-crd.mit.edu/). The idea is to map phenotypic terms to causal genes (for sepsis) and follow the SIRS timeline to form the integrated dataset. After that, robust machine learning models can be built on the integrated data and compared with already existing models.
JuxtaPiton is an architecture being developed at Princeton’s Parallel Group for heterogeneous ISA research. For this project, I will replace the existing PICORV32 core with the open-source ao486 core to have x86 ISA support along with the OpenSPARC T1’s SPARC V9. This kind of a heterogeneous system enables reuse of a lot of legacy x86 code. We also try to interface the L1 cache level of ao486 with the L1.5 cache subsystem of OpenPiton. This allows us to harness Piton’s P-Mesh subsystem which maintains cache coherency across both the cores.
Integration of Mapbox with React, making the feed completely new after integration; additions to the data structure along the way; tests for backend and frontend; and the addition and modification of WebSockets and Redis, with the much-needed edits and additions.
Integrate a testing and mocking framework to LibreMesh and provide the functionality needed to easily write new tests for actual or new code. Add tests for the core functions of LibreMesh.
LibreMesh, as an embedded operating system, usually depends a lot on the underlying hardware. But there are some parts of the code that don’t have that dependency, nor do they depend on the network or any particular state the device could be in. There are also many other cases where the states one would like to achieve in order to reproduce a situation are complex or impractical to set up. Unit testing the LibreMesh codebase will greatly help in both of these situations, and will help provide a much more robust solution for the communities it serves. Having automated unit and integration tests may improve quality and development speed, and shorten the release cycles of the LibreMesh software. Also, having tests that safeguard the core functionality may allow new developers to engage with the codebase with more confidence. For reviewers, it is also easier to understand and maintain code that has unit tests.
VISual MAth (visma) is an equation solver and visualiser which can be used to solve complex mathematical equations. It not only solves an equation but also displays the step-by-step approach used in the solution. It is also capable of producing 2D and 3D plots. My proposal is to refactor the old simplify modules (to use built-in Python functions) and to add new modules: a discrete maths module, higher-degree equation solvers, a matrix module in the GUI, support for simultaneous equation solvers, integration and differentiation modules, etc. As a side project, I will implement an "Equation Scanner" in VisMa, enabling the user to input an equation by providing an image of it. I will also focus on improving existing documentation and adding more documentation to the project.
Kubernetes Client is currently missing support for several types of resources: ServiceCatalog, TemplateInstance, VolumeAttachment, CertificateSigningRequest, SelfSubjectAccessReview, SelfSubjectRulesReview, TokenReview, ControllerRevision, UserInfo, AdmissionConfiguration.
The following resources don't have any test coverage: Endpoints, PersistentVolumeClaim, PersistentVolumes, SubjectAccessReview, PodPreset, HorizontalPodAutoscaler, Namespace, ResourceQuota.
Create a Quarkus extension: get the Kubernetes Client to work with Quarkus in native mode. Make the kubernetes-client SubstrateVM-friendly: SubstrateVM is a framework that allows compilation of Java programs into self-contained executables.
Blender has had a cloth simulator for quite a while now. It is based on a system that now needs major changes. A lot of research has been done in adaptive cloth simulation (which is the next big step towards being able to do realistic cloth simulation in reasonable computation time). By introducing adaptive cloth simulation into Blender, we can decrease the computation time per frame, thereby leading to a better quality of simulation in the same amount of time. Furthermore, it would then be possible to build really powerful cloth production pipelines. The current cloth production pipeline is slow and requires a lot of user interaction (for example, for adding the correct level of topology and redefining the stitches each time the topology changes). With the introduction of adaptive cloth simulation, the algorithm can automatically determine the necessary topology to get the correct collisions and realistic folds and wrinkles. Additionally, this project would act as the base for future improvements, such as adding contact friction and dynamic tearing of cloth.
D's betterC mode is an important tool for using D on bare-metal and embedded platforms. By disabling class support, for example, the compiler does not need as many runtime functions and types to be implemented for the code to compile and link. This makes the life of a bare-metal developer much easier.
But there is a catch! A lot of the language features require runtime hooks, which in turn require the TypeInfo class in order to function. One way of solving both of these problems is to move the runtime hooks to templates instead. This solves the betterC problem by removing the dependency on classes (the TypeInfo class), and it solves the safety issue because the compiler now has all the information about the hook and can verify it itself, rather than just trusting that the runtime developer remembered to mark the hook correctly.
This proposal will work on translating all the array hooks from using the TypeInfo class to templates.
This project aims at implementing TDS (Tabular Data Stream) protocol of Microsoft SQL Server based on SPI of the reactive SQL Client for Eclipse Vert.x. It should be a reactive non-blocking client and provide the abilities to interact with MSSQL server including connection, authentication, query execution and SQL data types encoding and parsing.
In the view of LHC Run 3, we want to extend the functionalities of Molr so that it will be ready to use in production to control various operational systems.
Features to be implemented:
Design of an Apple Watch application that replicates the most basic features of the iOS mobile client.
Building an Android downloader to be embedded in Firefox Lite with the following functionalities:
-Support Pause, Resume and Cancel functionality.
The underlying objective of the project is to increase the Ease of Access to Technology and Optimum Utilization of Resources for all those not living in the most developed parts of the world.
A recent addition to the local statistical models in PySAL is the implementation of Multiscale Geographically Weighted Regression (MGWR) model, a multiscale extension to the widely used approach for modeling process spatial heterogeneity - Geographically Weighted Regression (GWR). The GWR model in PySAL can currently estimate Gaussian, Poisson and Logistic models though the MGWR model is currently limited to only Gaussian models. This project aims to expand the MGWR model to nonlinear local spatial regression modeling techniques where the response outcomes may be discrete (following a Poisson distribution) or binary (Logistic models). Subsequently, to support efficient testing for different model implementations, a simulated data generator module will be implemented to supply test datasets following unique model variable distribution needs. This will also provide a foundation for possible expansion to test other local model implementations in PySAL. Additionally, since the functionality to predict the dependent variable at unsampled locations is not supported for the MGWR model in PySAL, this project also aims to enable predictions for MGWR.
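As a minimal illustration of the weighting idea behind (M)GWR, each calibration location weights nearby observations more heavily via a distance-decay kernel, with the bandwidth controlling the spatial scale (in MGWR, each covariate gets its own bandwidth). The sketch below is illustrative Python with a made-up bandwidth, not PySAL code:

```python
# Gaussian kernel weight used in geographically weighted regression:
# w_i = exp(-0.5 * (d_i / b)^2), where d_i is the distance from the
# calibration point to observation i and b is the bandwidth.
import math

def gaussian_weight(distance, bandwidth):
    return math.exp(-0.5 * (distance / bandwidth) ** 2)

# Nearby observations dominate the local regression:
print(round(gaussian_weight(0.0, 10.0), 3))    # 1.0
print(round(gaussian_weight(10.0, 10.0), 3))   # 0.607
print(round(gaussian_weight(30.0, 10.0), 3))   # 0.011
```

The same weights would enter a weighted least-squares fit at every calibration location; MGWR repeats this with a separate, calibrated bandwidth per covariate.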
The bevel modifier is extremely powerful, but there is a constant list of requested improvements that could expand its use cases and speed up the modeling process. One of these is user-drawn profiles, a commonly requested feature that is somewhat separate from the main functionality of the bevel operation, which makes it a good candidate for a GSoC project. There have been successful GSoC projects on the Bevel modifier in the past few years, and I hope that with this project I can continue that success.
This project will provide initial infrastructure to make dav1d capable of offloading some work to the GPU using shaders and APIs such as OpenGL, Vulkan, Metal and DirectX. I will write a shader for at least one of the decoding stages (more if time permits).
Collision detection is essential in a game engine. It is the reason you go bonkers playing Flappy Bird. However, it would be difficult and expensive to represent each object using its exact geometry, so a better idea would be to put those objects in bounding volumes or CollisionSolids. These CollisionSolids are mathematically defined, so by using "some math", a collision system would be able to detect their intersections with each other.
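To make the "some math" concrete, the simplest such test is sphere against sphere: two spheres intersect exactly when the distance between their centres is at most the sum of their radii. This is an illustrative Python sketch, not Panda3D's CollisionSolid code:

```python
# Two mathematically defined spheres intersect iff
# |c1 - c2| <= r1 + r2.
import math

def spheres_intersect(c1, r1, c2, r2):
    dist = math.dist(c1, c2)       # Euclidean distance between centres
    return dist <= r1 + r2

print(spheres_intersect((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))   # True (overlap)
print(spheres_intersect((0, 0, 0), 1.0, (3.0, 0, 0), 1.0))   # False (apart)
```

Every other solid-pair test in a collision system is a variation on this theme with more involved geometry.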
Currently, Panda3D is missing 3 collision tests: parabola into box, parabola into inverse sphere, box into capsule. Here are some use cases that we've probably seen before:
1) An Angry Bird hitting a box (parabola into box)
2) Protecting the audience from a football in a spherical stadium (parabola into inverse sphere)
3) Some box object hitting a player (box into capsule)
My first goal would be to add these collision tests.
My second goal would be to add another CollisionSolid named CollisionHeightfield. The idea is that we can represent heightfields using a grayscale image, with lighter (taller) and darker (lower) pixels. We can use this concept to efficiently deal with collisions in uneven terrain.
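A minimal sketch of the heightfield idea, assuming the grayscale image has already been read into a 2-D grid of height samples (names are illustrative, not the Panda3D API): a point collides with the terrain when it lies at or below the bilinearly interpolated surface height.

```python
# Heightfield collision sketch: terrain stored as a grid of heights
# (lighter pixels = taller), sampled with bilinear interpolation.

def sample_height(grid, x, y):
    """Bilinearly interpolate terrain height at (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def collides_with_terrain(grid, x, y, z):
    """A point collides if it is at or below the terrain surface."""
    return z <= sample_height(grid, x, y)

grid = [[0, 0, 0],
        [0, 4, 0],
        [0, 0, 0]]          # a single peak of height 4 in the middle
print(collides_with_terrain(grid, 1.0, 1.0, 2.0))   # True: below the peak
print(collides_with_terrain(grid, 1.0, 1.0, 5.0))   # False: above the peak
```

The efficiency win over a triangle mesh is that the terrain height at any (x, y) is a constant-time lookup rather than a search over geometry.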
The project consists of multiple pieces that incrementally add to the strength of the development and testing infrastructure.
To rework the VLC interface heavily and make it beautiful and useful again.
I'll be working on redesigning the new interface (VLC 4.0) to make it feature-compliant with the old interface.
This includes making Qt models to transfer data to and from the different VLC components. The data in the models will be accessible via a UI created using QML.
More specifically, I'll be working on:
Hello everyone, my proposal is for the Android application for Agora Vote. I have followed the template provided by the organisation and believe you will like it. Please take some time to review it and help me improve it.
WiFi services most often make use of a login page (a.k.a. captive page) that allows users to authenticate, sign up, and learn more about the WiFi service they are using. In this project idea we want to build a login page that is configured and built using React, and integrate the result into the official OpenWISP toolset.
bpftrace is a high-level tracing language for efficient kernel tracing using eBPF (extended Berkeley Packet Filters). This project creates a new PMDA (Performance Metrics Domain Agent) for PCP which runs arbitrary bpftrace scripts and stores the output as PCP metrics, and a new Vector widget which visualizes these collected metrics in a live heat map or table. Furthermore, the Vector widget will include a bpftrace query builder for the rapid creation of bpftrace scripts.
I have been using JOSM to add cartographic data about my university. My university has a lot of buildings with complex shapes that can be difficult to bound using nodes. So, my GSoC project will involve the following:
The tool will help save a lot of time for OSM contributors as it will make adding new buildings easier and faster.
We now have skunkworks-crow, a GSoC 2018 project. This year, we need to improve the project, develop a new strategy to address the highest-value limiting factors to broaden the user base, and propose a strategy to validate behavior on devices we may not have access to.
Increase test coverage. Improve CI integration and pipelining. Upgrade the Rails framework version.
This project will focus on implementing a JavaScript library to parse, validate and create SPDX documents. This library will implement an SPDX tag/value and RDF parser, validator and handler in JavaScript.
MapMint4ME (MM4ME) is an Android application that allows users to take photos, record their positions, and view their current location on a map based on the configuration settings of their MapMint server. The application stores data in the absence of internet connectivity and uploads the recorded data to the server when it is online. The aim of the project is to add Augmented Reality (AR) support to MapMint4ME and add features like adding markers while capturing images, drawing shapes on scenes, calculating the distance between points, calculating the area of an object in the frame, and geotagging captured data.
Macports currently uses a legacy version of Buildbot (0.8) as its continuous integration framework and hasn't upgraded due to certain drawbacks in the Waterfall view of the newer versions. However, the currently deployed version is outdated and falls short in several respects due to the absence of key features such as:
This has led to major setbacks in developer productivity. Macports also needs some custom views in Buildbot to better analyse build history, commits, etc. The legacy version doesn't allow us to write such custom views. This project will involve upgrading the Macports Buildbot infrastructure to the latest version, developing a plugin for Buildbot, and writing custom views.
Index Checker warns the user about code that can throw an IndexOutOfBoundsException. In this case study, four open source libraries will be annotated using Index Checker. Some improvements to Checker Framework JDK annotations are expected. The goal is to find possible inconsistencies or bugs in the annotated libraries or in the checker itself.
Anaphora resolution is the problem of resolving references to earlier items in the discourse. It most commonly appears as pronoun resolution, where we need to identify the antecedent in the source context. Apertium works with resource-poor languages, and the information available isn't as linguistically rich as parse trees. Hence there is a need for a tool that resolves anaphora using simple linguistic information.
Instead of the current system, which defaults to a male antecedent in pronoun resolution, this tool will use linguistic features to assign saliency scores to the possible antecedents. The highest-scored antecedent is picked for possessive, reflexive, and zero pronouns, and for long-distance relations like agreement in adjectives. This formalism is language agnostic, and the features make use of only POS tags and basic gender and number information. I will test it on Spanish, Catalan, English, Russian, French, etc.
When implemented, this tool will increase the fluency and intelligibility of the Apertium Translation Output of any pair it is used with. It has several interesting future prospects, such as using language specific linguistic features, and general coreference resolution.
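The scoring idea can be sketched in a few lines of Python. The weights and feature set here are illustrative assumptions, not Apertium's actual values; the point is that agreement in gender and number, POS preference, and recency combine into a single saliency score:

```python
# Saliency-based antecedent selection sketch: each candidate carries only
# POS, gender, number, and distance (in words) from the pronoun.

def saliency(candidate, pronoun, distance):
    score = 0
    if candidate["pos"] == "noun":
        score += 2                      # nouns are preferred antecedents
    if candidate["gender"] == pronoun["gender"]:
        score += 3                      # gender agreement
    if candidate["number"] == pronoun["number"]:
        score += 3                      # number agreement
    score -= distance                   # penalise distant candidates
    return score

def resolve(candidates, pronoun):
    """Pick the highest-scoring antecedent for a pronoun."""
    return max(candidates,
               key=lambda c: saliency(c, pronoun, c["distance"]))

candidates = [
    {"word": "Maria", "pos": "noun", "gender": "f", "number": "sg", "distance": 2},
    {"word": "John",  "pos": "noun", "gender": "m", "number": "sg", "distance": 1},
]
pronoun = {"gender": "f", "number": "sg"}
print(resolve(candidates, pronoun)["word"])   # Maria: agreement beats recency
```

Unlike the default-male baseline, gender agreement here outweighs the distance penalty, so the farther but agreeing candidate wins.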
Ecosystem issues such as improvements to tutorials and examples are an important milestone on the road map to JuMP 1.0. This project aims to improve the current JuMP ecosystem through the development of a JuMPTutorials.jl package, automating testing for new and old examples, updating old JuMP v0.18 code to v0.19, and standardizing naming and structuring of examples to enforce style guidelines.
The integrated script editor in Godot lacks features found in editors like VS Code, Sublime, and Emacs. These editors are also more popular among developers, so supporting them matters for usability. However, to implement core functionality such as diagnostics, registering custom symbols, and jump-to-definition, a language server architecture has to be adopted: the client (say, VS Code) communicates with the Godot language server to produce the desired result.
Microsoft's Language Server Protocol (LSP) is flexible and powerful enough to implement this functionality. It also supports many editors: VS Code, Atom, Sublime, etc. Hence, instead of writing complete extensions for each editor, the LSP server can be reused, while only the client, an editor extension, has to be rewritten.
Hadrian seeks to replace GHC's current Make-based build system some time around GHC 8.8, but since the main goal of Hadrian so far has been to achieve feature parity with the old build system, there hasn't been much focus on speed, which means there are likely many optimisations that would appreciably increase performance. Right now the main problem is a lack of parallelism, with the primary bottlenecks being configure and compiling stage 0. This project aims to identify the causes of the stage 0 bottleneck, look for other sources of slowness in Hadrian and Shake using profiling, and deal with them to improve Hadrian's performance as much as possible.
The Cuneiform Digital Library Initiative (CDLI) has a rich collection of information for over 334,000 Assyriological artifacts. CDLI maintains a database where it records each artifact's period, place of origin, and tablet writings, among other data points. The temporal, geographic, and textual data present for an artifact can be combined to render informative visualizations which would assist Assyriologists around the world in their research.
I propose a new approach for detecting show boundaries in videos by automating some of the processes that were previously done manually, plus a method akin to binary search with which we can accurately find boundaries in very few steps. The new approach not only works on shows for which manual annotations are available, but also finds accurate boundary intervals for shows we have never seen before.
strace currently adds significant overhead to any application it traces. Even when users are interested in only a handful of syscalls, strace will intercept all syscalls made by the observed processes, involving several context switches per syscall.
Since Linux 3.5, userspace applications can rely on seccomp-bpf to filter the syscalls they want to trace. In that case, the set of monitored syscalls is filtered in the kernel, using cBPF, before any context switch to userspace. strace could leverage seccomp-bpf to avoid tracing syscalls users don't want. The tracing landscape of Linux also drastically evolved in recent years. In particular, user applications can rely on eBPF programs to filter and aggregate data of interest in the kernel, with low overhead.
During this Google Summer of Code, I will finish and merge the work started to 1) rely on seccomp-bpf to filter syscalls in kernel space and 2) allow strace to use alternative backends. The second work item will come with a tracepoint/BPF proof of concept to ensure strace supports diverse backends beyond the usual ptrace model.
The aim of the project is to dive into the internals of Shogun, refactor and clean old code, and apply modern C++ principles. This includes:
The aim of this project is to develop a people identification system with the following two capabilities.
This component will allow for a more enriched interaction between robots and humans by enabling robots to make better decisions based on the input from different categories of people.
This project aims to develop user interactivity on the website through a full-fledged notification system and a platform to record user feedback. The notification system will be used to notify users associated with events and by the community to convey information to their audience i.e. the users, and the user feedback system will be used to determine the project quality/popularity amongst the developers.
Test new features that will help develop the application and work on all versions of the software.
Smooth Driving Application
The aim of this project is to integrate seamless support for the WOFF File Format 2.0 into FreeType so that these fonts can be recognized, decompressed, and loaded as any other SFNT font. This includes study and comparison of how SFNT fonts are wrapped into WOFF 1.0 and WOFF 2.0 files, exploration of existing libraries for WOFF 2.0 and Brotli compression, and finally, writing code to allow FreeType to handle WOFF 2.0 fonts.
An engine to simplify Dynamic Partial Order Reduction in JPF, as well as a tool to efficiently prove or disprove data race freedom in structured parallel programs that generalizes over input.
With Red Hen Lab's Rapid Annotator we try to enable researchers worldwide to annotate large chunks of data in a very short period of time with the least effort possible, and to get started with minimal training.
This project is aimed at extending the Red Hen Rapid Annotator, which was re-implemented from scratch as a Python/Flask application during last year's GSoC. It mainly aims to deliver a fully usable and handy product by the end of Google Summer of Code, incorporating new feature requests and bug fixes. The final product would be a complete tool for fast and simple classification of datasets, plus an administrative interface where experimenters can conduct their annotation runs. It broadly comprises 3 steps, namely
This project will involve:
Today, Gemstash is only able to store private gems directly on disk. That's fine for relatively small setups but becomes a problem when users want to run multiple web servers for redundancy, or need to store more gems than easily fit on a small server's hard disk.
This project would include extending Gemstash to support storing gems in other places. At a minimum, that would mean building a system for multiple backends and implementing the existing local disk storage as one of those backends. Ideally, it would also include implementing a backend for storing private gems in S3, to demonstrate that the backends system works for different kinds of backends.
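The backend system described above amounts to a small storage interface with interchangeable implementations. The sketch below illustrates the shape of that design in Python (Gemstash itself is Ruby, and these class and method names are invented for illustration):

```python
# Pluggable storage backends: a common interface, with local disk as
# one implementation. An S3 backend would implement the same two
# methods using S3 API calls instead of the filesystem.
import os
import tempfile

class Storage:
    def write(self, name, data):
        raise NotImplementedError
    def read(self, name):
        raise NotImplementedError

class LocalDiskStorage(Storage):
    """The existing on-disk behaviour, recast as one backend among many."""
    def __init__(self, root):
        self.root = root
    def write(self, name, data):
        with open(os.path.join(self.root, name), "wb") as f:
            f.write(data)
    def read(self, name):
        with open(os.path.join(self.root, name), "rb") as f:
            return f.read()

store = LocalDiskStorage(tempfile.mkdtemp())
store.write("demo.gem", b"gem bytes")
print(store.read("demo.gem"))   # b'gem bytes'
```

Because callers only see the `Storage` interface, swapping disk for S3 (or anything else) requires no changes outside the backend itself.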
While making medium to large scale games in Godot, many small bugs creep in that cannot be caught by the compiler. These can only be dealt with manually while debugging. This project will build a tool, to be used semi-regularly, that highlights these problematic pieces of code in an automated fashion. It essentially extends the scope of static checks, currently confined to each script, to operate across scripts and scenes.
VLC currently uses DXVA2 (DirectX), VA-API (Intel-focused) and VDPAU (Nvidia-focused) for GPU accelerated decoding. Nvidia’s NVDEC is the proprietary successor to VDPAU, as it supports newer chipsets and codecs. The goal of this project is to add NVDEC support in VLC media player.
Graphite is often used with Grafana to visualize metrics. When a graph for a certain metric is set up in Grafana and some thresholds are specified, users would like to be able to create a corresponding trigger for this metric in Moira. This project aims to create a Grafana plugin to create Moira triggers with the least hassle from the plugin itself.
The Performance Farm is a useful way to test Postgres functionality while changes are being made, and to analyze its performance on different operating systems. To further support the community effort, this project will extend it by building a database and a website on top of it, to make results easier to browse and display.
My goal is to build a web application in Python to interface the server and clients. The code will be scalable, portable, and lightweight, while providing users a functional interface to interact with the performance data.
The website will rely on a database with an optimised structure and queries to guarantee speed and efficient use of resources. The Django framework will be used so that the browser can send search, review, and storage requests.
The application will also take care of parsing, user handling, and securing connections, implementing a RESTful API and respecting web standards.
All changes will be subject to testing and bug fixes, to have a complete and coherent project with a clear documentation so that the final product is easy to set up and maintain.
We want to execute real-time applications using an RTOS on typical heterogeneous embedded systems and compare the results with APP4MC. In order to achieve this goal we intend to extend the well-known POLARSYS and APP4MC rovers. This new revision will include a significant increase in processing power based on a heterogeneous computing platform typically encountered in the automotive domain. Therefore, we will integrate the Nvidia Jetson TX2 module into the new rover. All deliverables, i.e. guides on how to reconstruct this new revision of the rover as well as the new application along with its documentation, will be published open source.
The project aims to improve and maintain at least 25 Sugar activities.
The aim of this project is to create an application that works like a journal, using blockchain technology and smart contracts. The application will use Hyperledger in order to track various research objects rather than just a simple manuscript.
The Data Retriever is a package manager for data. It downloads, cleans, and stores publicly available data, so that analysts spend less time cleaning and managing data, and more time analyzing it. The automation of this process reduces the time for a user to get most large datasets up and running by hours, and in some cases days.
Currently, it is hard to reproduce previous installations of a dataset using Data Retriever due to updates in the dataset and Data Retriever itself. This project aims to add provenance capabilities to Data Retriever so that it becomes easier to reproduce previous installations of a dataset at a later date.
I want to write code to translate the syntax supported by Query.jl into SQL. The queries can then be sent to a variety of database systems. I can take inspiration from implementations in other programming languages (notably LINQ). This work will benefit any user who works with tabular data in Julia. The process of translating code to work optimally with columns might overlap with the process of translating code into SQL. If I succeed in translating to SQL, I will start working on column-based optimizations afterwards.
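The core of such a translation is mapping a structured query description to a SQL string. The toy sketch below uses Python as a stand-in for Julia and a deliberately simplified query representation; it is not Query.jl's actual internal form, just the shape of the problem:

```python
# Translate a tiny query description (table, filters, projection)
# into a SQL string, LINQ-provider style.

def to_sql(table, filters=None, select=None):
    cols = ", ".join(select) if select else "*"
    sql = f"SELECT {cols} FROM {table}"
    if filters:
        sql += " WHERE " + " AND ".join(filters)
    return sql

print(to_sql("people", filters=["age > 30"], select=["name", "age"]))
# SELECT name, age FROM people WHERE age > 30
```

The real work lies in walking Query.jl's macro-generated expression tree instead of taking pre-made strings, and in handling joins, grouping, and dialect differences.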
Distributional ecology is a growing field of science dedicated to characterizing species distributions based on their ecological niches. Following early work by Joseph Grinnell and G. Evelyn Hutchinson, most tools in this field consider the environmental characteristics where species are found to model their niches via correlative approaches. Currently, these methods are used widely, and their applications include disease risk mapping, climate change risk predictions, and conservation biology, among others. Physiological data suggest that Grinnellian niches are convex in nature and probably have an ellipsoidal form when multiple dimensions are considered. However, among the available software in the field, algorithms to model ecological niches as ellipsoids in environmental space are scarce. Several analyses, not currently available, can be performed assuming ellipsoidal niches, especially in light of recent literature. This project aims to develop an R package of specialized tools to perform multiple analyses of ecological niches using ellipsoids. A broad community of researchers and students will find these open source tools useful in performing their analyses.
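An ellipsoidal niche model reduces to a Mahalanobis-distance test: an environmental point lies inside the niche when its squared Mahalanobis distance to the niche centroid is below a chosen threshold. The 2-D pure-Python sketch below is illustrative (the proposed R package would generalise this; the centroid, covariance, and threshold here are made up):

```python
# Point-in-ellipsoid test via squared Mahalanobis distance:
# D^2 = (x - mu)^T  Sigma^{-1}  (x - mu),  inside iff D^2 <= threshold.

def mahalanobis_sq(x, mu, cov_inv):
    d = [x[0] - mu[0], x[1] - mu[1]]
    return (d[0] * (cov_inv[0][0] * d[0] + cov_inv[0][1] * d[1])
            + d[1] * (cov_inv[1][0] * d[0] + cov_inv[1][1] * d[1]))

def inside_niche(x, mu, cov_inv, threshold):
    return mahalanobis_sq(x, mu, cov_inv) <= threshold

mu = (20.0, 1000.0)                       # e.g. mean temperature, precipitation
cov_inv = [[1.0, 0.0], [0.0, 0.0001]]     # inverse covariance (axis-aligned here)
print(inside_niche((21.0, 1100.0), mu, cov_inv, threshold=5.99))  # True: inside
print(inside_niche((30.0, 1100.0), mu, cov_inv, threshold=5.99))  # False: outside
```

The threshold 5.99 corresponds to a chi-squared quantile with 2 degrees of freedom, a common way to choose the ellipsoid's boundary.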
The aim of the project is to improve the current implementation of the Notebookbar. The task is divided into two major parts: creating basic extension support, and creating the customization feature for all Notebookbar configurations. Beyond that, the project intends to solve theming issues.
The purpose of the project is the implementation of an online Greek mail dictation system. In practice, the user will dictate the mail they want to send, and the speech will be converted into text. The system's performance will be improved through the training of personalized acoustic and language models. Extra features will be supported, such as special dictation commands and replay of the final email for verification.
Bindaas acts as a unified interface to various data sources like Apache Drill, MySQL, and MongoDB. The current implementation uses API keys for user authentication. I propose the use of JSON Web Tokens for a more scalable, secure, and faster authentication architecture.
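The scalability argument for JWTs is that the server can verify a token using only a shared secret, with no per-request database lookup as API keys require. The hand-rolled HS256 sketch below is purely for illustration (a real deployment would use an established JWT library, and Bindaas itself is Java):

```python
# Minimal HS256 JWT: header.payload.signature, each base64url-encoded,
# signed and verified with a shared secret.
import base64
import hashlib
import hmac
import json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload, secret):
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig])

def verify(token, secret):
    header, body, sig = token.split(b".")
    expected = b64url(hmac.new(secret, header + b"." + body, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"user": "alice"}, b"shared-secret")
print(verify(token, b"shared-secret"))   # True: signature matches
print(verify(token, b"wrong-secret"))    # False: tampered or wrong key
```

Because verification is pure computation, any Bindaas node holding the secret can authenticate requests independently, which is what makes the scheme faster and easier to scale than key lookups.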
Work needs to be done to check whether Prow can be replaced by GitHub Actions, to collect metrics without any gaps, and to address other issues labeled as low-hanging fruit.
Creation of a component capable of understanding the natural language of spoken input sentences (intent classification, entity extraction). Implement an API provided by this system that will be adapted for use in the RoboComp cognitive robotic architecture. Evaluate the quality of the semantic representation of knowledge for the development of future dialogue systems between humans and robots.
I plan to create a Python API by building a SWIG wrapper for the C++ libraries, to increase translation speed. Right now, the project calls Apertium binaries as subprocesses, which causes unnecessary overhead that can be reduced by wrapping the C++ libraries. The project will be cross-platform, available on PyPI, and usable from Jupyter notebooks. This ease of access should increase Apertium's user base.
Mind the Word is a very reliable browser extension which helps users learn and practice languages. This project aims to implement the following functionalities:
OpenWISP installation procedure has some limitations that make it unsuited for complex deployments that need horizontal scaling, custom setups and easily replicable deployments. This project aims to solve these problems by dockerizing the OpenWISP modules to give users plug-and-play images with all the supporting "batteries-included" services so that the user can get OpenWISP working on their servers for their organizations simply by changing the environment variables or files in volumes.
The project will increase Image Sequencer's capability while simultaneously demonstrating the ability to process satellite images. The general approach is to develop Image Sequencer as a user interface for OpenCV.js, and OpenCV.js as the computer vision processing engine for Image Sequencer. Technical objectives include: 1) streamlining satellite processing capability via Image Sequencer functions, 2) enabling/extending OpenCV applications for Image Sequencer, and 3) demonstrating daily satellite environmental analysis over a 3-month period.
There are thousands of weather satellites orbiting the Earth at any moment, constantly transmitting APT signals. Efficiently processing and understanding this data is of the utmost importance. Current software makes it possible to decode APT signals received by RTL-SDR receivers, but understanding the extracted data is hard, because open-source programs for proper visualization of these images do not yet exist. This project aims to create open-source software to address that issue, so that more developers and enthusiasts receive an efficient tool to help them in their research. The software will provide functionality for creating different types of interactive visualizations of satellite weather images, along with georeferencing them.
Java and UML (Unified Modeling Language) are very strongly connected. For Java programming itself there are countless tools and editors that can make the work of a Java programmer easier. On the other hand, there is no comparable software for the UML designer, since almost every drawing editor for UML diagrams is difficult to handle and requires manipulating lines and shapes by hand. For that reason UMLGraphs is an important and useful tool, not only for the UML designer but also for the Java programmer: fully functional software that automatically generates class diagrams and sequence diagrams from coding commands. UMLGraphs was developed using the com.sun.javadoc API, and the next level to achieve is to make it fully functional using the latest jdk.javadoc.doclet API. That is the main goal of this project, along with support for Java features such as lambdas and generics, unit tests, and integration tests.
The following functionality is aimed at:
Choice of Model (lattice dimensionality, Boson/Fermion, Hubbard, XXZ, Boundary conditions etc.).
Non-interacting/Mean Field Models:
Band Structure and Eigen-state calculations. (Canonical diagonalization of Fermions/Bosons). Ability to take band structures as inputs directly for subsequent calculations.
Visualization of iso-energy surfaces in the band-structure.
Berry Curvature (From dynamics/Gauge potential).
Chern numbers.
Interacting models:
Exact diagonalization.
Ground state and Entanglement Spectrum calculation with DMRG.
Dynamics with td-DMRG.
Calculation of Conductivity
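As a minimal illustration of the "Exact diagonalization" item above, the two-site spin-1/2 XXZ chain, H = Jz·Sz1Sz2 + (Jxy/2)·(S+1 S-2 + S-1 S+2), can be diagonalized by hand: in the basis {|↑↑>, |↑↓>, |↓↑>, |↓↓>} only the middle 2×2 block is non-diagonal. This pure-Python sketch is illustrative; the actual toolkit would diagonalize large sparse matrices numerically:

```python
# Exact spectrum of the two-site spin-1/2 XXZ model.
# |uu> and |dd> are eigenstates of the Ising term with energy Jz/4;
# the {|ud>, |du>} block [[-Jz/4, Jxy/2], [Jxy/2, -Jz/4]] has
# eigenvalues -Jz/4 +/- Jxy/2.

def xxz_two_site_spectrum(Jz=1.0, Jxy=1.0):
    aligned = Jz / 4
    mixed = [-Jz / 4 + Jxy / 2, -Jz / 4 - Jxy / 2]
    return sorted([aligned, aligned] + mixed)

# Heisenberg point (Jz = Jxy = 1): singlet at -3/4, triplet at +1/4.
print(xxz_two_site_spectrum())   # [-0.75, 0.25, 0.25, 0.25]
```

At the Heisenberg point this recovers the familiar singlet/triplet splitting, a useful sanity check for any exact-diagonalization code before scaling up.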
I propose to implement the following ideas in the project:
Super resolution is the process of up-scaling and improving the details of an image. Currently the super resolution modules within OpenCV are based on methods such as robust regularization and optic flow estimation, while the current state-of-the-art methods are based on deep learning. I propose to add learning-based super resolution methods to OpenCV. This will allow for more accurate and faster (real-time) super resolution.
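For reference, the classical baseline that learning-based methods aim to beat is simple interpolation. The pure-Python sketch below shows nearest-neighbour 2x upscaling on a tiny "image" (a real pipeline would operate on image arrays; this is illustrative, not OpenCV code):

```python
# Nearest-neighbour 2x upscale: double each pixel along both axes.
# Learning-based super resolution instead predicts the missing detail.

def upscale_2x(img):
    out = []
    for row in img:
        wide = [p for p in row for _ in range(2)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                     # duplicate each row
    return out

img = [[1, 2],
       [3, 4]]
print(upscale_2x(img))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Interpolation can only smear existing pixels; a trained network can hallucinate plausible high-frequency detail, which is why the deep-learning methods produce visibly sharper results.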
Code Slang is a JavaScript library that is flexible, intuitive, and human; whose syntax resembles natural language more than a programming language; and that transforms programming into a fun conversation with the computer rather than a rigid set of logical commands.
The goal of the project is to write a driver for I2S Stereo Decoder - UDA1334A
Scanned sketches are raster images, and raster images have limitations when resized; to overcome them you need a high-resolution raster, which increases the file size. The only alternatives are converting the image to vector graphics or completely redrawing it. After adding the vectorization feature to Synfig, scanned drawings and raster images can easily be converted into vector images, which are resolution independent; moreover, the obtained vector graphics can be further edited in Synfig. This proposal is about developing an option for Synfig to vectorize bitmap/raster images using vectorization algorithm(s) from OpenToonz.
Python 2.7 retires in a few months and will no longer be maintained, so the codebase needs to be ported to Python 3 while continuing to pass tests on both versions. The main improvement Python 3 brings over Python 2.x is greatly improved Unicode support. This matters for ScanCode, which has users working in more than 100 languages, since it makes handling text across languages far easier. The goal of this project is therefore to make scancode-toolkit installable on Python 3.6.x and later, just as it installs on Python 2.7.
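The port hinges on Python 3's strict separation of text (str) and binary data (bytes), which Python 2 blurred. A minimal sketch of the explicit decoding a Py3 port must do (the helper name is illustrative, not an actual scancode-toolkit function):

```python
def read_license_text(raw: bytes) -> str:
    """Decode raw file bytes into text explicitly, as Python 3 requires.
    (Illustrative helper, not a scancode-toolkit API.)"""
    # In Python 2, bytes and text mixed silently; Python 3 forces the
    # decode step, which is what makes multi-language text handling safer.
    return raw.decode("utf-8", errors="replace")

text = read_license_text(b"\xc2\xa9 2019 nexB Inc.")
```

The `errors="replace"` policy is one choice among several; a real port would decide per call site whether to replace, ignore, or raise on bad bytes.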
TrackPal is a public transport mobile application that will be implemented using React Native, with its components built on the Go-social components. Using this application, users can share their location while on a bus or train; the shared location updates as the vehicle moves, and other users can see the shared details. This solves a real problem for users: they can find a bus's or train's current location just by tapping a route or train number on the screen.
At present the MapMint project runs on Python 2.x, whose support ends in 2020, so MapMint is unable to take advantage of the improvements in Python 3.
I intend to port the MapMint source code to Python 3.x and reformat the code to improve its readability and ease of maintenance.
Future developments: Python documentation can be added to help new users understand the project.
My project covers the Starred Topics feature, long requested by the community as a way to bookmark topics they have posted in or want to revisit, along with an easier way to activate incoming webhook integrations and Zulip bots.
Zulip-terminal (ZT) is a light and fast terminal client for Zulip. It targets a niche audience of programmers who primarily use a terminal based interface. This project focuses on improving Zulip’s terminal client to reach the quality of Zulip’s web-app. The major aim would be to introduce new features, fix up most of the high priority issues and refactor code, as well as improve and add to the current test suite.
I would like to write an export tool for the CDLI dataset so it can be used with the Scaife viewer. The tool will need to convert the native ATF markup used by CDLI to the Text Encoding Initiative (TEI) XML schema used by Perseus, and generate corresponding Canonical Text Services annotations for each source.
This will improve the accessibility of the CDLI corpus by making it available to the larger community of tools developed for TEI analysis and provide a newer, more powerful, option for accessing the CDLI corpus online.
CTLearn is a Python package for using deep learning to perform analysis tasks on data from imaging atmospheric Cherenkov telescopes (IACTs). These tasks may be either classification or regression problems. The sensitivity of IACTs is mostly driven by our capability of distinguishing gamma-ray induced events from cosmic-ray induced events. The better our deep learning models are in telling these two populations apart, the better our reach in the gamma-ray Universe will be. Therefore, optimizing our deep learning models has the potential to make a difference in our view of the Universe at these energies. This project aims to implement an automated model optimization framework in CTLearn, including random and grid search, Bayesian and genetic algorithms based optimization.
GPUSharing is an open-source project that enables GPU sharing by leveraging Kubernetes scheduling and Device Plugin extensibility. I would like to integrate it with kubeflow/arena.
In this proposal, I describe my plan to improve Mio's Windows support. Mio is a low-level abstraction on top of the operating system's evented I/O APIs; it is used by Tokio to integrate with the operating system and perform I/O operations. Currently, Mio's Windows implementation lacks edge-trigger mode support, and it can be improved by rewriting it using the strategy employed by wepoll.
To understand how this could be done, I have taken the steps listed in the Ideas List page and researched what I will need to do. I have already browsed the wepoll and Mio code. I understand that edge-trigger mode can be simulated by re-enqueuing the node when WOULDBLOCK is returned. I have also read through the information on how wepoll employs a "base service provider" socket and AFD_POLL requests to implement a non-blocking epoll API.
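The re-enqueue-on-WOULDBLOCK idea can be mocked in a few lines. Python is used purely for illustration (Mio itself is Rust, and every name below is hypothetical):

```python
from collections import deque

WOULD_BLOCK = object()  # stand-in for the OS "operation would block" error

def drain_edge_triggered(sock, ready_queue):
    """Simulated edge-trigger semantics: read until WOULD_BLOCK, then
    re-enqueue the socket so it is only polled again on new readiness."""
    chunks = []
    while True:
        data = sock.try_read()
        if data is WOULD_BLOCK:
            ready_queue.append(sock)  # re-arm: the wepoll-style re-enqueue
            return chunks
        chunks.append(data)

class FakeSocket:
    """Non-blocking socket mock with a fixed sequence of pending reads."""
    def __init__(self, pending):
        self.pending = deque(pending)
    def try_read(self):
        return self.pending.popleft() if self.pending else WOULD_BLOCK

q = deque()
s = FakeSocket([b"hello", b"world"])
out = drain_edge_triggered(s, q)
```

The real implementation would issue AFD_POLL requests instead of appending to a queue, but the control flow (drain, then re-arm) is the same.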
Add support for Indexes on Expressions
The aim of the proposed project is to improve support for the Dhall language in mainstream editors via the Language Server Protocol standard.
Ensuring that two instances of Buildroot running with the same configuration for the same device yield the same result. Reproducible builds mean that if the same inputs are provided, you get the same output. Multiple builds with the same configuration (same input, same source) will result in the exact same build, even if they are executed in different locations, on different systems, and by different users. This allows multiple people/organizations (third parties) to verify the authenticity of the source code by reaching the "correct" result when building the software, as well as to have consistent builds on a development system and an automatic build system.
The objective of my proposal is to expand the features of the Exerciser activity. More precisely, the expected features are:
My project is about porting, implementing and developing features of the Neurolab desktop application, specifically the Audio and Visual Feedback features. First, the project needs program modes in the Android application so that the user can select between them, launch the application, and receive output/feedback accordingly. Then comes the development of the Audio and Visual Feedback features themselves. Developing these features means covering all the bases by coding the necessary classes (Java files) and function modules for handling the background processes, taking some inspiration from the Neurophlex project. Animations need to be integrated, and the app will need UX/UI improvements for a better in-app experience. Details of the development process are further in my proposal.
Autograding is one of the greatest features for making an instructor's work easier. It helps the instructor by grading submissions without reading the code: it compares the output produced for a predefined input file against the expected output file provided by the instructor.
Currently, instructors must write a configuration as a config.json (and any necessary additional files) and upload or store these files on the local file system. Automated grading can be improved by implementing the following features:
We can reduce the learning curve for new instructors by providing a getting-started tour feature.
Click boards are a flagship hardware product line of MikroElektronika, with over 600 add-on boards ranging from wireless connectivity clicks to Human Machine Interface clicks for interfacing with peripheral sensors or transceivers. Most Click boards use common protocols like SPI, I2C or UART to communicate with the BeagleBone, so support for them is currently accomplished via device tree overlays from the bb.org-overlays repository. This requires /boot/uEnv.txt to be modified to load the drivers at boot, requiring at least one reboot to enable the support in a potentially error-prone way.
The Greybus Simulator is a tool which simulates an AP Bridge, SVC, and an arbitrary set of modules plugged into Greybus. Greybus already provides most of the interfaces used on Click boards and utilises manifest files to enumerate hardware at run-time. This project aims to enable Click board support via the Greybus Simulator by writing suitable manifests according to the Click board specifications; by simply copying a manifest to a hotplug directory, a Click board can be loaded, which makes the interfacing a lot easier.
JavaSMT provides a common API layer in Java for accessing various SMT solvers with very little runtime overhead compared to using the solvers’ API directly. The ease of use provided by a unified API across these SMT solvers is a huge advantage. The API is optimized for performance and it is also customisable to the target solver. JavaSMT provides type-safety and can express formulas in the theory of integers, rationals, bitvectors, floating-points, and uninterpreted-functions, and supports model generation, interpolation, formula inspection and transformation. [7]
This project will extend the capability of JavaSMT by increasing the number of supported SMT solvers. The goal is to add two more solvers, OpenSMT and STP, to its growing list of supported SMT solvers.
WikiEduDashboard is a versatile tool for engaging in peer learning through editing and reviewing wiki articles. Currently, the tracking of articles is very limited, and this project intends to add multi-wiki support, which will make the tool even more flexible and give users the choice of which articles to track. Furthermore, parts of the dashboard will be internationalized so that people speaking different languages can benefit from it.
The CircuitVerse site provides simulation of digital design circuits on the web. For large-scale projects, users need to use some circuits inside other circuits, without caring about the details of the sub-circuit itself, treating it as a black box with only inputs and outputs. This project aims to add many more features giving users flexibility and options for how to display related information from sub-circuits and for better displaying the sub-circuit itself. These features will open many new possibilities for users of the website.
Currently, the Google backend for GVfs supports only a subset of the operations allowed by Google Drive's web interface. The major obstacle to supporting all operations is the difference between how POSIX systems handle files and how a database-backed system like Google Drive handles them. This difference results in limitations on what operations can be performed with the current libgdata API. Since each file's identifier is its ID, i.e. "name" equals ID, we have to use "display-name" specifically for storing a file's title, which is what nautilus shows.
Simply copying/moving files from one folder to another currently fails with "Operation not supported", an error raised so as to preserve the file's title. Copying/moving is one of the fundamental operations that should be possible on a file. My ultimate goal with this proposal is to add support for this necessity and make the Google Drive backend more usable.
The GDPR defines pseudonymization as the processing of personal data in such a way that the data can no longer be attributed to a specific data subject without the use of additional information. Although the GDPR has been in force since 2018, no reliable infrastructure exists in Greece to protect sensitive documents. Therefore, I propose the creation of a LibreOffice extension as well as a web GUI that will anonymize information in any legal document given. All sensitive information should be easily anonymized through this open-source tool.
PortfolioAnalytics is a popular R package designed to provide optimized solutions and visualizations for portfolio allocation problems with complex constraints and objectives. To carry out the optimization tasks, it draws on solvers from many other R packages. This summer, I want to expand the package by adding more optimization solvers, including but not limited to "Soma", "Quadprog", "Rglpk", "Mco", "Parma", and "Nloptr". I also want to fix the existing bugs in the "ROI" method. During the summer, I will read through the official documents, implement all the solvers, and finish testing them. By the end of the summer, I hope the users of PortfolioAnalytics will have more optimization methods to fit their needs.
The new Camel website is a major migration of the http://camel.apache.org/ website. The community has flagged building and publishing this website as "help needed", so I would like to choose it as my project for GSoC 2019. This work is focused on the Camel 3 final release in September.
Search Builder allows you to define your own search and arrange the criteria according to your specific needs, combining criteria with multiple AND and OR groups. The current Search Builder lets you combine conditions into multiple AND/OR groups and return results, but multi-level nesting of AND and OR is not currently possible.
The project aims to create a user-friendly Search Builder, where users can express complex conditions through the interface rather than by writing queries, and get results. While Advanced Search is limited to certain features, Search Builder extends the limit of what can be searched easily.
The feature is currently restricted to Contacts (Get only).
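To illustrate multi-level nesting, a condition tree whose internal nodes are AND/OR groups can be evaluated recursively. The JSON-like schema below is purely illustrative, not the project's actual format:

```python
def evaluate(node, record):
    """Recursively evaluate a nested AND/OR condition tree against a record.
    (Illustrative schema, not the real Search Builder data model.)"""
    op = node.get("op")
    if op in ("AND", "OR"):
        results = (evaluate(child, record) for child in node["children"])
        return all(results) if op == "AND" else any(results)
    # Leaf: a single field comparison.
    field, cmp, value = node["field"], node["cmp"], node["value"]
    actual = record.get(field)
    if cmp == "=":
        return actual == value
    if cmp == ">":
        return actual is not None and actual > value
    raise ValueError(f"unsupported comparison: {cmp}")

# (age > 30 AND city = 'Pune') OR name = 'Asha'
query = {"op": "OR", "children": [
    {"op": "AND", "children": [
        {"field": "age", "cmp": ">", "value": 30},
        {"field": "city", "cmp": "=", "value": "Pune"},
    ]},
    {"field": "name", "cmp": "=", "value": "Asha"},
]}
```

Because the tree is recursive, arbitrary nesting depth comes for free; the UI work is then mainly about building and displaying such trees.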
The "Icon" module provides the integration of icons throughout Drupal and makes websites look beautiful. Icons can be integrated into Blocks, Menu Items, Fields, and content by using Filters.
The development of GPUs for general-purpose computing has revolutionized the field of deep learning. They are critical for training large deep neural networks (DNNs) and for improving inference performance.
OpenCV's DNN module has blazing fast inference on CPUs compared to other popular libraries such as TensorFlow or PyTorch. It supports inference on GPUs using OpenCL but not CUDA. NVIDIA's GPUs support OpenCL, but their capabilities are limited by it; a separate CUDA backend is required to reap maximum performance from NVIDIA GPUs.
This project aims at adding a complete CUDA backend for OpenCV’s DNN module. By the end of the project, the DNN module should be capable of performing inference on CUDA enabled GPUs nearly as fast as or faster than existing deep learning frameworks such as TensorFlow or PyTorch.
Bug fixes and feature implementations for Annotatrix tool
rkt implements the App Container Executor specification of the appc Container Specification and uses systemd unit properties to implement its features. systemd unit properties are not suitable for implementing the OCI runtime spec, since they differ from what the spec defines. The idea is to replace systemd unit properties with runc to implement the OCI runtime spec.
This project will focus on solving several of the high priority issues related to the general user experience of Zulip. This would involve improving the user interface in some areas, adding requested features in some places, as well as fixing some areas where the behavior is not as expected. All of this, in the hopes that it would improve the feel of Zulip for existing users and make it easier for new users to adopt.
Though it was not intentional, several of the issues to be tackled here are requests made by large communities as their major blockers to switching over to Zulip. This project consists of work that goes over a few areas, broadly, these goals could be clumped together as:
Refactoring, testing and bundling of the "capture" feature of the Spectral Workbench web app into a standalone JavaScript library with an API, plus an app around it written in React.
Implement the Sofort Pay and AliPay payment gateways using Stripe Sources. Implement an admin overview of payment statuses for various roles in the admin sales tab. Automate payment reminders and integrate custom reminder templates with custom frequencies and related features. Limit the account features of organizers depending on their platform invoice payment. Develop a feature which sends reminders to organizers about their invoices. Improve exception handling and error message customization. Implement security tests and automated reports to decrease vulnerability to attacks. Maintain compatibility between the legacy version and the new version by writing data migration scripts. Implement missing server endpoints to check the presence of various components.
The transformation of textual data into time series variables, and their subsequent use in an econometric analysis is an important and emerging research area with its own specific issues. The extensions of the sentometrics package, which are described in this proposal, will help researchers to more easily tackle these issues when using the R programming language. Moreover, the current code base will be made more robust and compatible with other packages.
A common use case for regex matchers is to query all series matching a set of label values, e.g. up{instance=~"foo|bar|baz"}. Grafana's template variables feature is a big user of that pattern. We could catch such a pattern and split it into three different matchers, each selecting one of the three cases; this would make the templated queries produced by Grafana much faster. Postings lists are lists of numbers which reference the series that contain a given label pair; they are used as a lookup table to get the requested series. The second part of the project is to research and implement compression for them.
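The splitting optimization can be sketched as follows. Python is used for illustration only (Prometheus itself is written in Go, and the function and literal check here are hypothetical):

```python
import re

# Conservative "no regex metacharacters" check; real label values may need
# a broader character class, this is only an illustration.
_LITERAL = re.compile(r"^[A-Za-z0-9_\-]+$")

def split_alternation(pattern):
    """If a regex is a plain alternation of literals (e.g. "foo|bar|baz"),
    return the literals so the query can be served by fast equality
    matchers; otherwise return None and fall back to the regex matcher."""
    parts = pattern.split("|")
    if len(parts) > 1 and all(_LITERAL.match(p) for p in parts):
        return parts
    return None
```

Each returned literal then maps to a direct postings-list lookup, avoiding a full regex scan over all label values.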
Development of a pipeline for crystallography analysis, including GUI development.
Add more functionality, such as NumPy-like operators, lazy evaluation, and code generation, to the tensor module, especially the array module.
This project aims to introduce a plugin mechanism to the Kubernetes Dashboard. It will deal with defining the plugin framework architecture and its scope, how it could enhance the Dashboard UI, and how third-party APIs could be utilized to extend its functionality.
The attendance plugin lets teachers display a QR code so that students can record their own attendance; the QR code is currently static for the session and does not change. This project aims to increase the security of the feature by implementing a process that frequently rotates the displayed QR code and expires the old one, making it difficult for the QR code to be shared outside the session.
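One way to rotate the code is a TOTP-style time-windowed token embedded in the QR code's URL. The sketch below is an assumption about the approach, not the plugin's actual scheme:

```python
import hashlib
import hmac
import time

def rotating_token(secret: bytes, period: int = 30, now: float = None) -> str:
    """Derive a short-lived token for the attendance QR code. The token
    changes every `period` seconds, so a shared screenshot goes stale."""
    if now is None:
        now = time.time()
    counter = int(now) // period  # same value for everyone in one window
    digest = hmac.new(secret, str(counter).encode(), hashlib.sha256).hexdigest()
    return digest[:8]  # short enough to fit comfortably in a QR-code URL

def is_valid(token, secret, period=30, now=None, skew=1):
    """Accept the current window plus `skew` previous windows, to tolerate
    the small delay between display and scan."""
    if now is None:
        now = time.time()
    return any(
        hmac.compare_digest(token, rotating_token(secret, period, now - i * period))
        for i in range(skew + 1)
    )
```

Using `hmac.compare_digest` avoids timing side-channels; the window size and skew trade off convenience against how long a leaked code stays usable.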
My project will focus on implementing topological sort, transitive closure, and the Lengauer-Tarjan dominator tree algorithm.
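For the first of the three algorithms, a minimal sketch of Kahn's topological sort (Python for illustration; the target library's API will differ):

```python
from collections import deque

def topological_sort(graph):
    """Kahn's algorithm: repeatedly emit a vertex with no remaining
    incoming edges. `graph` maps each vertex to its successors.
    Raises ValueError if the graph contains a cycle."""
    indegree = {v: 0 for v in graph}
    for succs in graph.values():
        for s in succs:
            indegree[s] = indegree.get(s, 0) + 1
    queue = deque(v for v, d in indegree.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for s in graph.get(v, ()):
            indegree[s] -= 1
            if indegree[s] == 0:
                queue.append(s)
    if len(order) != len(indegree):
        raise ValueError("graph has a cycle; no topological order exists")
    return order
```

Transitive closure and Lengauer-Tarjan build on similar graph traversals but with more bookkeeping (reachability sets and semidominators, respectively).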
Mifos Community App is the default web application built on top of the Apache Fineract platform. It is maintained by the Mifos Initiative as a reference solution for the financial inclusion community. It is a Single-Page App (SPA) written in web standard technologies like JavaScript, CSS and HTML5. It's also the starting point for any partners looking to customize or extend the UI. Due to limitations of Angular 1.x it is being rewritten as Mifos web-app which uses Angular 6 making it more future proof and opens up many opportunities to improve the user interface.
Even though chemistry has become a more data-driven discipline in recent years, the amount of data available for training deep learning models is limited compared to imaging datasets (ImageNet, for example). Transfer learning is a strategy that leverages representations learnt by training deep learning models on larger datasets with available labels, then fine-tuning the trained network on smaller, costly-to-label datasets.
This project is about porting a transfer learning framework, ChemNet, into DeepChem. This involves reproducing the results of the paper, along with a blog post and a Jupyter notebook detailing its use. A ChemNet API will also be developed to allow any DeepChem TensorGraph model to use this framework.
Currently, the frontend of PMA is inconsistent and full of small issues. It is high time we switched from custom-written stylesheets to a more dependable and regularly updated framework, so this project aims to refactor the UI using Bootstrap 4.
BioJS is a library of over one hundred JavaScript components enabling users to visualise and process data using current web technologies. BioJS makes it easy for users to integrate their visualisations into their own website or web application.
The process of updating the BioJS standard to version 3.0 is ongoing. A Yeoman generator that upgrades BioJS components by wrapping them in a Web Component has already been created, enabling users to embed and use the visualisations of various BioJS components.
This project involves improving the Yeoman generator by adding new features and tests, making it easier for developers to wrap old BioJS components as Web Components and to create new BioJS Web Components. It also involves clearly documenting the whole process of using the generator, so that working with the generator and making new components is easy for developers, and so that even non-technical people can embed and use the visualisations in their work.
The project is based upon Flux Balance Analysis (FBA), a method for simulating metabolism in genome-scale reconstructions of metabolic networks. This project aims at building a web-based pipeline to help with this analysis. The user will be able to submit an SBML model (an XML-based representation format for biological models), select one of the analysis methods, and in turn get a result produced by passing the input through the pipeline.
The pipeline will send the input to the COBRAPy microservice, which will return the result through node.js express routes and the results will be displayed via ReactJS frontend.
Opportunistic IPsec is an attempt to encrypt the internet at large. The idea is to build VPN tunnels directly to all internet hosts irrespective of the communication used. An initial proof of concept was created that leverages LetsEncrypt certificates for use with IKE and IPsec. The goal of this project is to turn this proof of concept into production quality code that makes it trivial to enrol and deploy on any server and any client.
The project has the following objectives:
The purpose of this project is to build a converter that can translate models built in TensorFlow, Keras, PyTorch, MXNet, and ONNX to mlpack's model format and vice versa. I would also like to add tools that display the mlpack code required to create the corresponding model.
With the advent of extremely deep neural architectures and their high training cost, transfer learning is the only way out. That said, the number of pre-trained models for mlpack is practically zero so far, while more popular frameworks like TensorFlow have dozens of them.
Models trained in mlpack can be converted and used in Tensorflow for better benchmarking and feature-testing. Moreover, it can open up the field for mlpack and make it as popular as the other frameworks mentioned.
I am extremely excited to work on this project and am confident of getting it done, because it resonates perfectly with my interests and with work I have already completed. I have outlined my approach in the proposal. Please reach out to me if any elaboration is required for any specific portion; I will be more than happy to explain. Thank you.
Finite element methods require a discretization of a domain into small elements, called a mesh. Typically, users of DOLFIN use an external mesh generation package, such as gmsh, to construct meshes before reading them into DOLFIN. In this project, we will work to ensure that gmsh, DOLFIN, and our preferred visualization package, Paraview, work seamlessly together. This will be a huge usability improvement for users working with very complex geometries.
Over the last couple of years, the Swift compiler has gained a new library called libSyntax. Its purpose is to represent the syntax of Swift source code with full fidelity (including white-space), enable structured editing and provide immutable, thread-safe data structures.
This paper proposes that the Swift parser fully embraces the new libSyntax library in its parser and stops emitting ASTs. This will allow more parts of the compiler pipeline to eventually leverage the capabilities of the new library.
The core of the proposal is a wide improvement of the LaTeX integration with JabRef, focusing on the end user, whose characteristics have been studied first. The goal is to make our users' work easier.
I will develop several tools for analysing and checking LaTeX documents: finding bibliographic entries (which are used, how many times, and where) and counting words, characters and citations per document; validating TeX files to detect issues; and importing entries from TeX files. If there is enough time, I will also add support for connecting to JabRef from external editors; this way, it would be possible to search for a certain entry from TeXstudio.
Traditional patching methods require a lot of investigation, looking for the right patch, applying it, compiling, failing, and trying again. The Debian Patch Porting System aims to systematize the security patching process by incorporating the most efficient patching practices and heuristics, and to make it easy to backport patches in Debian packages. Given a CVE ID, a package name and the package version, the system should find patches for that CVE ID, apply the patch or patches into source and display the results.
Add EFS support as an external workspace on AWS instances, and possibly add the alibaba-credentials-plugin as another deliverable. Add support for JCasC configuration of the plugin and monitoring of disk use.
Agora is a library of data structures and algorithms for counting votes in elections. It consists of over 40 algorithms along with testing frameworks. The library clearly aims to include a great number of vote-counting algorithms and to carry out computations as fast as possible, which is why more algorithms need to be added, with speed of execution taken into consideration.
This project would extend the existing version and transform it into version 4.0 by adding customer support, integration with an external payment hub, support for TOTPs with Google Authenticator, and an improved outbound notification generator. It will also add better support for skinning, theming and white-labelling of the app, add a report section, implement unit and integration testing, and integrate with the corresponding APIs on the Mifos platform.
QEMU's TCG just-in-time compiler translates target CPU instructions into host CPU instructions so that programs written for other CPU architectures can be run on any host. Modern CPUs feature vector processing instructions, sometimes called Single Instruction Multiple Data (SIMD) instructions, which perform the same operation on multiple data elements at once. Intel's SSE and AVX instruction set extensions were introduced for x86 CPUs for this purpose.
The target/i386 front-end has support for TCG emulation of SSE4.2, but does not feature support for later vector extensions, such as AVX. The goal of the proposed project is to implement and test AVX instructions that are currently not implemented in QEMU's TCG.
InSilico is an extensible editor which can be used for manipulating and analyzing many different file types, SBML files in particular. The current implementation of InSilico uses Maven Tycho and Eclipse PDE as its build system; these will be replaced by Gradle. The goal of this project is to create a Gradle plugin that allows users to configure their OSGi application with easily understandable commands. Upon completion, developers will be able to complete all of the noted sub-tasks just by interacting with the Gradle script, and this does not mean writing complex scripts: it will just mean using the simple Gradle commands developed through this project. This in turn will encourage developers to create plugins and features for InSilico, making the jobs of developers and users easier.
We are planning to create an R package, mlr3viz, to implement visualization for machine learning objects in the mlr3 package. This proposal includes the design of the package, development plans and deliverables for each stage, and background information showing how my experience fits this project.
This project focuses on adding various new features to Popper 2.0 like the addition of various runtimes, adding more subcommands, developing a searchable actions library, working on remote workflows and files, report generation, etc. It also focuses on building an Automated Popper Compliance Verification System which would verify that a pipeline is popper compliant.
Practically speaking, in science, engineering, and statistics, most computations are done numerically. My project aims to implement a library for working with Chebyshev polynomials. For complex mathematical functions, Chebyshev polynomials allow us to perform operations such as differentiation, integration, and the solution of ODEs with greater speed and the same numeric accuracy as when dealing with the original functions. In this project, I hope to implement a Chebyshev polynomials module, along with the appropriate documentation, residual functions to validate the approximations, and test cases. The functions associated with the Chebyshev polynomials will be usable alongside the rest of the existing hmatrix library. I will also integrate the Accelerate language, an embedded language built to provide high-performance parallel arrays for Haskell. This library naturally fits into the project because it allows quick computations on large multi-dimensional arrays, and numerical algorithms are often used to solve large systems of equations that are easily converted into large matrices of data.
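To illustrate the intended workflow, here is the same approximate-then-differentiate pipeline expressed in Python via numpy.polynomial.chebyshev (the hmatrix module itself will be Haskell; this only demonstrates the numerics):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a smooth function at Chebyshev nodes on [-1, 1] and fit a
# Chebyshev series to it.
n = 33
xs = np.cos(np.pi * (np.arange(n) + 0.5) / n)  # Chebyshev nodes
ys = np.exp(xs) * np.sin(5 * xs)
coeffs = C.chebfit(xs, ys, deg=n - 1)

# Differentiation becomes a cheap linear operation on the coefficients,
# which is the core speed advantage over manipulating the function itself.
dcoeffs = C.chebder(coeffs)

x = 0.3
approx = C.chebval(x, coeffs)
exact = np.exp(x) * np.sin(5 * x)
residual = abs(approx - exact)  # a residual check like those proposed
```

Because the sampled function is analytic, the coefficients decay rapidly and both the value and the derivative of the series agree with the true function to near machine precision.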
Adding a version control API to Godot which supports working with multiple types of popular VCSs (like Git, SVN, Mercurial, and Perforce), both distributed and centralized. This API shall be used to implement multiple VCS-friendly features in the Godot Editor.
Since Godot is gradually entering a competitive market dominated by Unreal Engine and Unity3D, it is only intuitive for it to support game developers with strong version control integration from within the editor.
Amahi is basically a cloud-based server. The features the Amahi iOS app currently includes are browsing files on Amahi servers (HDAs), storing them for offline access, and streaming multimedia.
I propose to add more features to the project, improve the existing ones and achieve features that are currently active in the Android counterpart.
The following are the deliverables I propose to achieve during the GSoC program: Chromecast Support, Improved Multimedia Players, Display Recent Files, Upload/Delete Files, Friending, Secondary User Login
Apache Dubbo already supports traffic-balancing algorithms such as Round Robin and Random, but none of them take into account that some servers may be busy handling many requests, that there can be huge hardware differences between them (for example, server Y may have much more processing power than server X), or that specific servers can become overloaded. In this scenario, the load balancer may keep sending requests to a slow server (higher response time) even when other servers with lower response times are available. The new load balancer should know about each server's health, isolate anomalous servers based on statistics, and forward requests to a capable server. Dubbo is designed around an SPI mechanism, which allows a new implementation of the LoadBalance.java interface, and it already has a class that can be reused to get health information such as MetricsFilter.java.
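The selection policy can be sketched as inverse-load weighted random choice. This is illustrative Python, not the actual LoadBalance.java implementation, and the statistics schema is assumed:

```python
import random

def pick_server(stats):
    """Choose a backend weighted inversely to its observed load.
    `stats` maps server name -> {"active": in-flight requests,
    "avg_ms": average response time, "healthy": bool (optional)}.
    Unhealthy (anomalous) servers are isolated entirely.
    (A sketch of the idea, not Dubbo's SPI implementation.)"""
    healthy = {s: v for s, v in stats.items() if v.get("healthy", True)}
    if not healthy:
        raise RuntimeError("no healthy servers available")
    # Busier / slower servers get a smaller weight.
    weights = {s: 1.0 / ((v["active"] + 1) * (v["avg_ms"] + 1))
               for s, v in healthy.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for s, w in weights.items():
        r -= w
        if r <= 0:
            return s
    return s  # numeric edge case: fall back to the last server
```

Keeping a random component (rather than always picking the minimum) avoids herd effects where every balancer node floods the single currently-fastest server.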
The goal of this project will be to move the existing functionality of the angular code over to react. The new components will be using web standards as much as possible, avoiding extensive use of third-party libraries.
NumPy is the fundamental and most widely used library in Python for scientific computation, but it executes on the CPU only. CuPy provides the same syntax as NumPy to leverage the power of the GPU. The current problem with CuPy is that it contains many, but not all, of the functions provided by NumPy. So in this project I want to implement a “fallback mode” for CuPy: whenever the user calls a function that is not yet implemented in CuPy, it will automatically call the corresponding NumPy function and return the result.
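The dispatch idea can be sketched with attribute forwarding (a minimal illustration using a stub "GPU" module in place of CuPy; the names are my own, and the real project would also handle moving arrays between device and host memory around the fallback call):

```python
import math
import types

# A stand-in "GPU" module that, like CuPy, implements only a subset
# of the CPU library's functions (the subset here is illustrative).
_gpu = types.SimpleNamespace(sqrt=lambda x: x ** 0.5)

class FallbackMode:
    """Dispatch to the GPU module when the function exists there,
    otherwise fall back to the CPU module transparently."""
    def __init__(self, gpu_module, cpu_module):
        self._gpu = gpu_module
        self._cpu = cpu_module

    def __getattr__(self, name):
        if hasattr(self._gpu, name):
            return getattr(self._gpu, name)   # "GPU" path
        return getattr(self._cpu, name)       # CPU fallback

# In the real project, the two modules would be cupy and numpy.
xp = FallbackMode(_gpu, math)
```

User code written against `xp` then works whether or not the GPU backend implements a given function.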
There are a number of bugs and missing features in the current version of the code. Though they do not hinder the user's work, fixing them would make the user experience more seamless and smooth. The aim is to make the code free of, and less prone to, bugs, and to add features that further enhance accessibility, adaptability, and user-friendliness.
Building a free version of Amazon Rekognition with as many features as possible during the three-month time span.
The goal of this project is to implement a feature that will allow exporting Synfig animations for the web through the Lottie format (http://airbnb.io/lottie/). Lottie is a library that parses animations exported as JSON and renders them natively on mobile and on the web. The feature will be implemented as a plugin in Synfig Studio. As a result, Synfig will become a platform for creating animated web content.
Currently Chapel has support for 'shared' lifetime-managed objects, implemented using reference counting. Unfortunately, reference counting carries very large overheads that are remedied by other types of memory reclamation. This project focuses on exploring Epoch-Based Reclamation and the newer Interval-Based Reclamation, first by implementing them in shared memory in a way that allows them to be direct contributions to the language, and then by adapting them and possibly synthesizing newer memory reclamation algorithms from those mentioned above or even from newer, novel ideas.
OpenType SVG is a reliable format that is growing in popularity and is currently supported in Windows 10+, macOS Mojave and iOS 12. It’s well supported in most products of the Adobe Creative Suite and by some popular browsers like Microsoft Edge, Safari and Firefox. FreeType is already a powerful font renderer. It’s being used in Android, iOS, macOS, PlayStation Consoles as well as ReactOS. Support for OpenType SVG fonts in FreeType is going to be a very useful and impactful addition to FreeType’s capabilities. I plan to accomplish this by:
This project aims to integrate some of the computer vision based alpha matting algorithms into OpenCV. Alpha matting refers to the problem of softly extracting the foreground from an image. It plays an important role in many image/video editing tasks, like layer separation, background replacement, and foreground toning. To begin with, "A Global Sampling Method for Alpha Matting" by Kaiming He et al. will be implemented, and then we plan to implement "Designing Effective Inter-Pixel Information Flow for Natural Image Matting" by Yagiz et al. We will conduct experiments comparing the results to existing alpha matting algorithms at alphamatting.com. The dataset for experimentation is also available at alphamatting.com.
This project aims at integrating Vimeo's Psalm into the CiviCRM Jenkins toolchain for static analysis purposes. This project will also work on improving code coverage report generation for CiviCRM. Both static analysis and improved unit test coverage reports will make for better quality assurance benchmarks for CiviCRM.
FreeCAD is a powerful tool with worldwide usage but suffers a shortage of developers. This proposal aims to encourage new developers to adopt FreeCAD development by improving their experience with the development environment.
I read the work Jorge Perez did for GSoC 2018 (especially for Sun-orbiting bodies) and found it fascinating. I would love to build upon it and finally incorporate it into the OrbitDeterminator web app, as well as add other relevant features and make the software more user-friendly.
Field Officer Application is an application developed for a bank's field officers to keep track of clients, centers, groups, loan accounts, savings accounts, etc. Currently, the Mifos Android Client is at version 5. The app is still under development, and this project aims at the release of version 6 of the app.
My goals will mainly include: adding T-OTP based 2-factor authentication using Google Authenticator app, deeper integration of Notifications framework, integration of SMS communications (in-app push notifications), UI improvements and redesigning and testing & documentation.
Enhancing the CarbonFootprint-API by adding and refactoring functionality. The aim of this project is to give users access to carbon footprint emission data so that they can create applications that make people aware of the adverse effects of their daily activities. The idea of letting people see their CO2 emissions when travelling via different map services is unique and exciting. Providing an interface alongside the API will help overcome the general public's ignorance of how their everyday activities increase carbon dioxide generation.
Proposal for committed work over the summer, built on my already acquired experience and knowledge of the Zulip codebase - continuing my work on implementing PGP email features, addressing any current or future email mirror and rate limiter issues/TODOs (the areas of the codebase I'm most familiar with), as well as a couple of other interesting issues in other areas.
This is a proposal for Project #3, Workflow Designer for Continuous Workflows (version 3). I sent the previous two versions to the mentor, Petr, and revised them based on his feedback. My proposal is made up of several parts: Synopsis, Project details, Implementation details, Minimal Deliverables, Optional Deliverables, Timeline, Plan for communication, and Candidate details. I describe the previous workflow, the current problems, and my methods in this proposal.
Yellowbrick is an open-source Python data visualization library aiding both exploratory data analysis and machine learning tasks. As my Google Summer of Code project, I would like to propose building a new visualizer, “Effect Plot”, to help in interpreting linear models. Along with effect plots, a secondary aim of the project would be to extend the PCA visualizer by adding features like an optional heatmap, colorbar, alpha params, and other hyperparameters. I would also build test classes for Grid Search during this period. An effect plot tells the user about the effect the various features of a dataset have when the dataset is trained on a particular linear model; it is essentially a boxplot showing the variance and medians of the effect of each feature. The purpose of this project is to cater to the needs of users by giving them control over various aspects of effect plots, such as face color, line color, line width, and the shape and color of outliers, along with many other tunable parameters.
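The quantity an effect plot summarizes can be sketched in a few lines (an illustrative computation only; the function names are my own, and the actual visualizer would render these per-feature distributions with matplotlib):

```python
from statistics import median

def feature_effects(coefs, X):
    """Per-sample effect of each feature in a fitted linear model:
    effect[i][j] = coefs[j] * X[i][j]."""
    return [[c * x for c, x in zip(coefs, row)] for row in X]

def effect_medians(coefs, X):
    """Median effect per feature - the central line of each box
    in the boxplot an effect plot would draw."""
    effects = feature_effects(coefs, X)
    return [median(col) for col in zip(*effects)]
```

For example, with coefficients [2, -1] and two samples [[1, 1], [3, 5]], the per-feature effect distributions are [2, 6] and [-1, -5], so the box medians are 4 and -3.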
The current Working Hour Plugin provides an interface to set up a schedule of allowable build times but the user interface and usability still need enhancement. Thus a new user interface based on new technologies like React could be used to optimize user experience and code readability.
MuseScore currently supports playback for a majority of its elements. Chord symbols, however, are purely visual and lack playback. When writing or arranging lead sheets, this can be an issue, as the chord symbols detail the entire harmony of the piece. Additionally, other users may want to notate partial ideas or a skeleton of an idea that is most easily represented with chord symbols. In these cases, a user would only be able to hear a preview of the melody and nothing else. Unless the user is an experienced professional with strong audiation skills, being unable to physically hear the harmony of their music is a significant setback.
This project aims to implement the playback of chord symbols in a flexible way to suit different styles and genres of music and be versatile for use with final scores as well as unfinished drafts. Completion of this project will allow users to work with greater efficiency and enhanced feedback on their scores as well as set up MuseScore for additional development in harmony.
CVE Binary Tool currently runs on Linux systems by taking advantage of shell commands like file and strings. Since file and strings have already been natively reimplemented, it is a natural next step to extend the tool to other operating systems like Windows.
I will develop a library to acquire data from external sources, mainly through forms in an Android application, store it in a local database, and handle the conversion to the KML data format to be shown on the LG.
The project will seek to implement a nesting feature in Pitivi. Users will be able to select several clips and nest them into a single clip, which can be edited in a separate timeline. The nested sequence will be available in the Media Library for further use, and edits made to it will be reflected in every instance of the sequence in the timeline. This will provide an efficient workflow and a richer user experience.
This project aims to add a more flexible asynchronous file system cache server as an engine module. It will cache any IO operations, and provide more flexibility with respect to the caching behaviour. This project will allow Godot engine to better handle large asset files and potentially decrease load times in the editor as well.
Create a modern dynamic interface for the CScout project
Currently the gatsby-source-plone plugin is unable to update, delete, or create nodes after fetching data from a Plone site unless the GatsbyJS development server is manually restarted or the GatsbyJS site is rebuilt. This project plans to enhance the existing "gatsby-source-plone" plugin with features that allow the "gatsby develop" development server to provide up-to-date, instant live previews of pages that contain data from a Plone CMS source. Plone is an enterprise open-source CMS solution written in Python. The aim is to enhance "gatsby-source-plone" to provide a GatsbyJS development experience matching that provided by "gatsby-source-filesystem"-based plugins, which can create, update, and delete nodes from filesystem events.
Floating-point number operations are not guaranteed to produce correct results. A certain range of errors can occur silently (overflow, underflow, imprecision, etc.). This library aims to provide a replacement for floating-point numbers that warns on error.
The library is already in a relatively advanced state. The goal of this project is to bring it to a review-ready state. In this application, I suggest exploring a new design that should hopefully end up being more performant than the current one. I also want to implement the missing features and to complete the tests and documentation.
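The kind of silent error the library guards against can be sketched as follows (a minimal Python illustration of the idea; the actual library is not Python, and the `CheckedFloat` name and the specific checks are my own):

```python
import math
import warnings

class CheckedFloat:
    """Sketch of a float wrapper that warns when an operation silently
    loses correctness: overflow to infinity, or absorption, where a
    nonzero addend is lost entirely to rounding."""
    def __init__(self, value):
        self.value = float(value)

    def __add__(self, other):
        other = other.value if isinstance(other, CheckedFloat) else float(other)
        result = self.value + other
        if math.isinf(result) and not (math.isinf(self.value) or math.isinf(other)):
            warnings.warn("overflow in addition", RuntimeWarning)
        elif other != 0.0 and result == self.value:
            warnings.warn("absorption: addend lost to rounding", RuntimeWarning)
        return CheckedFloat(result)
```

For example, `CheckedFloat(1e308) + CheckedFloat(1e308)` overflows to infinity and triggers a warning, while `1.0 + 1.0` passes silently.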
The Linkerd control plane currently gets metrics from the Prometheus instance and the Tap server and provides them in the CLI as well as the Linkerd dashboard. As observability is one of the most important parts of a service mesh, it's important to have a single endpoint where users can query for metrics throughout their microservices, allowing developers to build front-ends and alerts on top of it. This project aims to add a GraphQL web interface as part of the controller that abstracts away all the systems that provide metrics, leaving a single type system that users can query to get the data they need. As more features are added to Linkerd, adding more metrics about them is just a matter of adding new types to the GraphQL type system along with the relevant resolver functions that talk to the relevant back-end to get the data.
"The goal of the Web of Things is to extend the web of pages into a web of things by giving connected devices URLs on the World Wide Web. This will allow the web to be used as a unifying application layer for a decentralized Internet of Things."
WebGL is a web version of OpenGL, i.e., a 3D engine. It allows you to create 3D material in the browser using JavaScript. It is rendered using the GPU and is thus more performant than the regular canvas, so it is also used for 2D games. This project aims to implement some new functionality for p5.js using WebGL, to expand the current functionality related to lighting, and to introduce younger artists to the fabulous world of computer graphics.
The rise of reinforcement learning based problems or any problem which requires that an agent must interact with an environment introduces additional challenges for benchmarking. In contrast to the supervised learning setting where performance is measured by evaluating on a static test set, it is less straightforward to measure generalisation performance of these agents in the context of the interactions with the environment. Evaluating these agents involves running the associated code on a collection of unseen environments that constitutes a hidden test set for such a scenario. The goal of this project is to set up a robust pipeline for uploading prediction code in the form of Docker containers that will be evaluated on remote machines and the results will be displayed on the leaderboard.
The goal of this project is to implement an interactive JavaScript view to enable “Active Learning” within KNIME Analytics Platform. While KNIME Analytics Platform already contains a JavaScript enabled node for the manual annotation of samples (Table Editor), a dedicated active learning node would allow users to interact with a KNIME workflow more intuitively and allow for active learning specific functionality such as a labeling wizard or fast multiobject labeling. After the implementation phase, the person working on the project is encouraged to build active learning examples that involve text or images in order to test implemented node(s).
The project aims at implementing GANs in TMVA, the machine learning toolkit of the ROOT framework. This would be immensely useful because of the advent, popularity, and versatile nature of GANs. GANs can essentially be used for simulation and physical/mathematical modeling of patterns learned from training data, substantially faster and more accurately than any other generative model. The model can be used for generating training data and finds many applications in high-energy particle physics and astrophysical research.
Knowledge-based question answering (KBQA) has demonstrated an ability to generate natural-language answers from information stored in a large-scale knowledge base. It has attracted a lot of attention in the research areas of natural language processing and information retrieval. Generally, it completes the analysis in three steps: identifying named entities, detecting predicates, and generating SPARQL queries. Among these, predicate detection, a core component of the process, identifies the KB relation(s) a question refers to. To build a predicate detection structure, we first identify all possible named entities, then collect all predicates corresponding to those entities. What follows is calculating the similarity between the question and the candidate predicates using a multi-granularity neural network model (MGNN). To find the globally optimal entity-predicate assignment, we use a joint model based on the results of the entity linking and predicate detection processes, rather than taking the local predictions (i.e., the most probable entity or predicate) as the final result.
Slic3r provides an auto-arrange feature to arrange the parts to be printed. For the time being, Slic3r has a placeholder function for auto-arrange that doesn't actually arrange parts.
SVGnest is a JS library which provides an open-source implementation of a nesting algorithm.
Nesting is a term used in the manufacturing industry that refers to the process of finding the optimal placement of parts with different shapes on a single sheet so as to maximize the number of parts placed per sheet. This minimizes the material used, which translates to less money being spent.
For our purposes in Slic3r, a nesting algorithm could be used to auto-arrange the parts on the print bed for more efficient printing, reducing the number of print runs required to finish printing all the parts.
The goal of this project ‘Diff for Graphs’ is to implement a graph comparison functionality which will allow users to compare two or more graphs that are accessible by them. This includes graphs created by them, graphs shared with them by other users and all the public graphs. This project extends the work done in ‘Git for graphs’ project - which was a part of GSoC ‘18.
JPF is a model checking tool for Java applications, and jpf-core is its core structure. The build for jpf-core has been moved from Ant (up to the Java 8 support version) to Gradle. The current jpf-core version doesn't support Java 11, i.e., jpf-core is not portable to Java 11. The JPF extensions have not moved from Ant to Gradle yet because of potential breaking changes from previous versions; one such major breaking change in Java 11 is “bootstrap methods”. The goal of this project is to fix Gradle support for Java 11, update the extension template, and provide the widely used JPF extensions with Gradle support.
Refactoring tools help Pharo developers perform a number of predefined refactorings in an automated fashion. However, besides the options provided by Pharo, there are still some missing refactoring options and a list of open issues.
The goal of this project is to improve the Pharo’s refactoring support by
One of the ways MuseScore lags behind other pieces of software is in its handling of instrument changes. Currently, to change between pitched instruments, you add an instrument change object, right-click on it to change the instrument, and then change the text manually. This is quite unintuitive, and there is currently no way to change between pitched and unpitched percussion. I would like to implement a number of changes to streamline the process and bring the functionality up to the level of competing pieces of software.
The Internet is a place where people come to find information, and due to the rapid growth of social media use, a great deal of data is collected. But due to the misuse of social platforms, people have created a lot of inaccurate and fake content that looks real. This problem is important to solve because, if it is not, the accuracy of information on the internet will drop rapidly and we will not be able to believe anything we see or read online.
The Fact Bounty project is intended to solve this huge issue. The high-level idea of the project is to use crowdsourcing to identify the validity of content posted on social platforms. My role in the Fact Bounty project would be to implement a web application which users can use to contribute to classifying content and to view the validity of content in a visualized manner. The following proposal contains information about myself and states how I intend to implement a web application for Fact Bounty!
The aim of this project is to port WebKit2 to Haiku and create a minibrowser to demonstrate the functionalities offered by the completed port. This is done to replace the old rendering engine WebKitLegacy used by Haiku's mainstream browser WebPositive.
I plan to package Kotlin 1.1.1 so that the android sdk tools packages that depend on Kotlin can finally be packaged. I am also planning on updating the other android sdk packages that are still in android 7.0.0.
Through ISC Kea's RESTful API, administrators can remotely perform both read-only operations, like querying statistics, and read-write operations, like changing configurations on the fly. In addition, Kea allows third-party hook libraries to define custom operations on its RESTful API. The current API implementation only allows users to perform operations using HTTP POST requests with JSON payloads that contain the command details. It is desirable to extend it with HTTP GET support so that operations with different security implications can be separated, enabling administrators to have finer-grained remote control over Kea. This proposal aims to achieve that by implementing an alternative API that reads command details from the request method and URI instead of JSON payloads. The structure of the new API will be derived from existing commands using RESTful principles. A hook point will be exposed to third-party hook libraries to define custom routes, and a request dispatcher will be implemented to parse command details. The existing API will remain unchanged to maintain backward compatibility.
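The dispatcher's core job - translating a GET request into the JSON command the existing machinery understands - can be sketched like this (a Python illustration only; the URI scheme shown is hypothetical, though `statistic-get` and `config-get` are existing Kea command names):

```python
def get_request_to_command(method, path):
    """Translate an HTTP GET request into a Kea-style JSON command.

    The route layout here is an assumption; the real mapping would be
    derived from Kea's existing commands using RESTful principles.
    """
    if method != "GET":
        raise ValueError("only GET is handled in this sketch")
    parts = [p for p in path.strip("/").split("/") if p]
    if len(parts) == 2 and parts[0] == "statistics":
        # e.g. GET /statistics/pkt4-received -> statistic-get (read-only)
        return {"command": "statistic-get", "arguments": {"name": parts[1]}}
    if parts == ["config"]:
        # GET /config -> config-get (read-only, safe to expose via GET)
        return {"command": "config-get"}
    raise ValueError("no route for GET " + path)
```

Because the mapping produces ordinary command objects, the existing POST-based handlers can process them unchanged, which is how backward compatibility is preserved.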
Add Structured Commons support to Commons Android app
Measuring the similarity between two geometric models is an important problem in diverse fields, including computer graphics, computer games, and geometric modeling. For example, similarity measures can be used as benchmarks to determine the deviation of a processed geometry from a given ground-truth model.
One such similarity measure is the Hausdorff distance. Given two compact subsets A, B of a metric space, the one-sided Hausdorff distance between A and B is defined as the distance between an element a from A and an element b from B, where a is chosen to maximize the distance and b is chosen to minimize it. Intuitively speaking, the Hausdorff distance measures the maximum deviation between two models.
The project consists of implementing the algorithm for interactive Hausdorff distance computation presented by Tang, Lee, and Kim. While it provides an approximation to the Hausdorff distance - as already present in the CGAL framework - it ensures that the computed measure is within a user-specified error bound. Thereby, the user can obtain results as good as necessary for the application at hand within a fast and efficient pipeline.
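The definition above can be written down directly for finite point sets (a brute-force Python sketch for intuition; the Tang, Lee, and Kim algorithm achieves interactivity through bounding and culling rather than this exhaustive scan):

```python
def one_sided_hausdorff(A, B):
    """d(A -> B) = max over a in A of (min over b in B of dist(a, b)).

    A, B: sequences of 2D points.
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    return max(min(dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    # The symmetric Hausdorff distance is the larger of the two
    # one-sided distances.
    return max(one_sided_hausdorff(A, B), one_sided_hausdorff(B, A))
```

For instance, the one-sided distance from {(0, 0)} to {(3, 4)} is 5, matching the intuition of maximum deviation between the two sets.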
The project consists of the creation of three new topological rules in the gvSIG desktop toolbox: "Must be disjoint", "Must not have dangles", and "Must be larger than cluster tolerance". These rules will help verify the integrity of spatial information, validate representations, and correct possible errors in point, linear, and polygonal geometries, ultimately ensuring the quality of geographic data with open-source software.
RAMSES is a 3D graphics framework concerned with the efficient distribution of graphics among multiple screens for automotive purposes. While the framework is almost complete, it lacks a tool that can load, modify and export 3D assets in a suitable format so applications developed with RAMSES can use them at runtime. The following is my plan for the duration of the program:
During pre-dump in CRIU, the memory pages of the target process are stored in pipes until the content is flushed to a disk image or page server. The primary issue is the bloat of irreclaimable memory due to these pipe pages: pipe buffers are pinned in memory, making them non-swappable, and pipes have a maximum size restriction, so many pipes may be needed, which creates memory pressure during the pre-dump process. Replacing the pipe pages with a userspace-supplied buffer from CRIU will alleviate this memory pressure, since the buffer pages can be swapped out.
Another issue is the duration for which the pre-dump algorithm freezes the target task. Use cases like live migration expect the smallest possible freezing glitch, so a further objective is to reduce this freeze time during pre-dump.
To achieve both of these objectives, we use the process_vm_readv system call with a VMA list gathered by freezing the target task for a minimal amount of time. There are challenges in handling memory areas with this approach, such as race conditions resulting in false reads. Developing a graceful solution to these challenges is the objective of this project.
Enabling Containerd to have remote blob store for image content (layer blobs)
The main goal of this project is extending the available datasets and offering some additional features to TensorFlow Datasets that can make TensorFlow users' lives easier.
This project is about building an embodied cognitive simulation, i.e. one in which robots we call vehicles have a body and a simple "mind", represented by an activational neural network. The body has a defined shape, sensors that capture signals from the environment, and motors that move the vehicle in reaction to those signals. We then evolve the neural networks inside vehicles using a genetic algorithm with an appropriate fitness function, and hope to observe some natural behavioural patterns as well as certain connectome motifs seen in nature. This would allow us to reproduce the very same behaviours in simulation, as well as to hypothesize on the correspondence of connectome motifs to specific behaviours. The project's results have possible applications in brain development studies, as well as in transferring synthesized behaviour models to robots and enriching the intelligence of virtual embodied systems (e.g. game AI).
Nuitka already has support for many built-ins, e.g. len, which means dedicated C code, compile-time evaluation, and type shapes produced (in this case an int). But there are notable exceptions, e.g. enumerate, where we also know the types, that are still missing and could have a high performance impact on some loops. Without that support, loops using enumerate are losing out on many optimization opportunities. This project aims to identify and optimize missing built-ins, to ultimately achieve complete support for all built-ins in dedicated C code.
While creating a Portfile for MacPorts, one has to write the file manually or, depending on the upstream, use tools such as pypi2port, cpan2port, etc. So as not to deal with a different tool for each type of package, the Universal Packaging Tool (UPT) can be used. The main aim of this project is to bring MacPorts support to UPT and to add features to UPT such as recursive packaging and updating.
Heralding is essentially a credentials honeypot. It can log credentials (username and password) for many protocols, but some protocols do not transmit the password during authentication; instead they transmit a hashed value of the password, often combined with some salt. This project involves implementing/improving the RDP and VNC protocols, and designing a method to crack the hashed passwords of those protocols, targeting VNC and RDP, to gain insight into the passwords an attacker uses.
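The cracking step amounts to a dictionary attack against the captured salted hash. A minimal sketch, assuming a simple hash = sha256(salt + password) scheme (real RDP and VNC challenge-response schemes differ per protocol, so this is illustrative only):

```python
import hashlib

def crack_salted_hash(target_hash, salt, wordlist):
    """Try each candidate password: re-derive the salted hash and
    compare it to the captured value. Returns the recovered password
    or None if no candidate matches."""
    for candidate in wordlist:
        digest = hashlib.sha256(salt + candidate.encode()).hexdigest()
        if digest == target_hash:
            return candidate
    return None
```

In the honeypot setting, the wordlist would come from common-password corpora, and each recovered password tells us what credentials the attacker attempted.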
BridgeDb is a platform used for identifier mapping in WikiPathways and PathVisio, with support for various environments: a Cytoscape plugin, an R package, and an API implemented in OpenRiskNet and Open PHACTS. BridgeDb provides a platform to link other databases like Ensembl, NCBI Gene, ChEBI, PubChem, etc. The main idea of this project is to add functionality and release a major version of BridgeDb. This new version would support Java versions above 8 and would also have support for 'secondary identifiers'. As part of the project, the metadata output provided by BridgeDb will also be formatted to meet 'The FAIR Guiding Principles for scientific data management and stewardship'.
The ultimate goal of my project is to redesign and redevelop GTK's official website, https://gtk.org, giving it a design that follows current trends and content updates that really matter to users and developers, using modern static site generators. The website will use GitLab CI for deployment. The project is a major milestone on the way to the release of GTK 4.0.
The current version of the GitLab plugin for Jenkins doesn't fully support Multibranch Pipeline jobs. There exists another unofficial plugin (GitLab Branch Source Plugin) which hasn't been released yet. There has been an abandoned effort to combine both these plugins due to multiple issues, mostly because the two plugins use different APIs. The project's plan is to develop a new GitLab API plugin that can be consumed by other plugins, and a new GitLab Branch Source plugin which will provide all freestyle, single-branch, and multibranch pipeline support. The existing GitLab plugin will only contain auth, triggers, and other GitLab-Jenkins configurations. All partial implementations of branch source functions will be deprecated.
Last year, I was successful in completing a lot of important functionality that brought Cabal's v2/new-style interface closer to feature parity with the classic interface, but improving this core part of the Haskell developer infrastructure is perhaps one of the most universal ways to benefit the Haskell community, and there is still work to be done.
ChainerX is a versatile ndarray implementation with special support for deep learning-specific operations. It is therefore important to support the many fundamental operators usually available in ndarray libraries (e.g. those provided by NumPy and SciPy), as well as special operators focused on deep learning applications (e.g. convolution, pooling, activation functions, etc.). While Chainer implements many of these operators, ChainerX still has low coverage. Full coverage of Chainer's operators (chainer.functions) and more coverage of NumPy APIs is needed.
Linear algebra operations form the backbone for most of the computation components for many machine learning methods and simulations of physical systems. This project aims at extending ChainerX capability for common numerical linear algebra routines.
viNLaP is an interactive, data-driven web dashboard with three modules, each based on one of three main analyses: spatial, temporal, and statistical/traditional. Each module will include traditional visualizations related to the respective analysis, but also novel visualizations based on those proposed in the literature. In this first scenario, viNLaP will visualize polarized data, but it is built to be useful for new types of datasets that may come in the future.
The project aims at creating a rich beam module and also extending the current continuum mechanics module by integrating cross-sectional geometries in beam module and enabling it to draw the basic diagram of the beam using matplotlib. Further, it aims at implementing Column Buckling and its corresponding calculations. Finally, Truss structure analysis using the method of joints has to be implemented as a part of the continuum mechanics module.
Redis is an open-source (BSD-licensed), in-memory data structure store used as a database, cache, and message broker, so it would be beneficial if Apache Gora could use Redis as well. This proposal is mainly dedicated to providing Redis compatibility for Gora. This requires implementing a new datastore, and Apache Gora provides a great facility for doing so.
Multi user synchronous image editing in realtime (publiclab/mapknitter)
This project aims to contribute to the stability of the Android port of VLC by writing test suites for the VLC user interface and the libVLC port for Android. This will ensure that regressions are caught by developers at an early stage and that the released application stays stable.
My primary goal is to develop autonomous autorotation capability for traditional helicopters running ArduPilot. The ability to autonomously detect a power failure and safely execute an autorotation will add a greater level of redundancy to the ArduCopter firmware. This will aid in making traditional helicopters safer to operate, reducing the risk of harm/damage to people, property, and the unmanned vehicle itself.
My secondary goal is to develop the functionality without loss of generality, maintaining its applicability to all helicopters.
Contained within this proposal is a review of previous work done by others. Following this, preliminary investigations show that main rotor head momentum is a critical factor. A methodology is proposed to scale a generic velocity trajectory using the main rotor head momentum in hover. The trajectory will be empirically determined through SITL and real-life testing. The resulting flight mode will provide a robust approach to completing an autorotation manoeuvre for any helicopter. An overview plan of how development time will be allocated, over the GSoC period, is presented. Finally, a short section detailing my background follows.
This project aims to implement several techniques for credit scoring and identification of potential NPAs. These are: implementing a scorecard based on rules; and applying machine learning techniques to help detect bad loans.
This project aims to boost the performance of the Image Sequencer project in three ways: using WebAssembly to accelerate the parts of the code primarily involving pixel manipulations; using web workers to run various parts of the code on different threads; and implementing a demo that detects indoor pollution levels, measured as formaldehyde concentrations, using a colorimetry badge and Image Sequencer.
The article recommendation pipeline consists of many parts. This project aims to improve the pipeline by solving various issues in the article-recommender projects.
The issues to be worked on are:
This project aims to improve the tooling for tensor network algorithms in Julia and to demonstrate Julia's advantages (composability, performance, and its ecosystem, among others) by implementing cutting-edge differentiable tensor network algorithms that integrate tools from machine learning, quantum mechanics, and mathematical optimisation. The end result will be a new Julia implementation of the einsum interface and a cutting-edge package for differentiable tensor network algorithms, reproducing the results of a recent paper that represents the new state of the art in infinite two-dimensional tensor networks.
A framework is to be built that can run a given test on all the firewalls (pf, ipfw, ipf), since the firewalls have a lot in common. For features specific to one firewall, a flag will specify which firewall a test applies to. The framework is intended to automatically set up the required firewall with the required configuration for each test; all of this will be scripted within the framework. To set up a firewall, vnet(9) and jails are used to virtualize network stacks. The tests to be integrated into this framework are as follows:
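A minimal sketch of how such per-firewall dispatch might look, with all names hypothetical (the real framework would be driven by the FreeBSD test suite and would create vnet(9) jails rather than the stand-in callback used here):

```python
FIREWALLS = ("pf", "ipfw", "ipf")

class FirewallTest:
    """A test case plus the firewalls it applies to (None = all of them)."""
    def __init__(self, name, config, only=None):
        self.name = name        # test identifier
        self.config = config    # firewall ruleset the test needs loaded
        self.only = only        # optional flag restricting target firewalls

    def targets(self):
        return self.only or FIREWALLS

def run_suite(tests, setup_jail):
    """Run each test against each applicable firewall.

    setup_jail(fw, config) stands in for creating a vnet jail with the
    given firewall configured; it returns an object with a run() method.
    """
    results = {}
    for test in tests:
        for fw in test.targets():
            jail = setup_jail(fw, test.config)
            results[(test.name, fw)] = jail.run(test.name)
    return results
```

The point of the design is that common tests are written once and fan out over all three firewalls, while firewall-specific tests opt out via the `only` flag.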
KDE Connect is an application that syncs notifications between mobile phones and laptops, shares files between them, and provides plugins extending its utility based on various events, such as pausing playing media when the phone rings. A Windows port of KDE Connect exists, but it has a lot of problems and is currently not suitable for a release. This project aims to fix the Windows port's plugins, add system integrations, and make a release to the Windows Store.
By the end of this project, we will have a releasable installer shipped to the Windows Store for review.
The ultimate objective of this project is to develop an automated software system with a graphical user interface that can estimate manufacturing cost and time based on machine drawings. Machine drawings are the standard procedure for providing the precise details necessary for engineering manufacturing processes; they include dimensions, various symbols, and 2-dimensional views describing the geometry of the product.
The OpenMRS platform can be extended via two kinds of add-ons: (1) Modules, and (2) Open Web Apps.
Originally we built an "OpenMRS Module Repository" where people could upload their modules, but nowadays there are lots of other repositories where people can publish the code they build, and we want to support a distributed ecosystem that allows people in the community to publish their OpenMRS add-ons wherever they want: as GitHub Releases, on Bintray, on Maven, etc. So the Module Repository has been replaced with an "Add-On Index."
The goal of this project is to make various enhancements to OpenMRS Addons: making it easier for end users to sign up for updates, showing module and tag stats, and adding support for GitHub releases.
Machines have not traditionally been socially aware, and the effort required to make their behavior socially acceptable is tremendous. This project is an effort in that direction: making a machine more socially aware using machine learning techniques on graph data, which better captures the semantics of the environment. The project focuses on a graphical representation of a scenario, from which the social acceptability score of a robot's behavior can be derived. By coupling graph data with the power of machine learning, this project takes a step closer to social intelligence embedded in robots.
In the proposal, I described and demonstrated my ideas about the visualization tasks: 1) clusters of event categories over time; 2) distribution of emotions across nations and networks; and 3) the emotional intensity of a single event cascading through the international news landscape.
In this project there are two major goals: 1) improving the existing translators from Italian to Catalan, from Portuguese to Catalan and from Catalan to Portuguese, obtaining a WER below 15% and a Wikipedia coverage above 91%; and 2) developing a translator from Catalan to Italian with a WER below 15% and a Wikipedia coverage above 91%.
bdchecks is an infrastructure for performing, filtering, and managing various biodiversity data checks using R. Data checks are key to promoting biodiversity data quality. bdchecks offers features for different types of R users:
Improving the quality of biodiversity research, in some measure, is based on improving user-level data cleaning tools and skills. Adopting a more comprehensive approach for incorporating data cleaning as part of data analysis will not only improve the quality of biodiversity data, but will impose a more appropriate usage of such data.
FreeBSD includes support for the kernel coverage sanitizer and the undefined behaviour sanitizer; however, support for the other sanitizers is missing. These are useful for finding bugs while fuzzing the kernel.
Port one or more of KASAN, KMSAN, and KTSAN to work in the FreeBSD kernel. Use the ported sanitizers with fuzzers (syzkaller or triforce) in order to find more memory vulnerabilities.
Apache Gora is an open-source framework that aims to give users an easy-to-use in-memory data model and persistence for big data frameworks, with datastore-specific mappings. The overall goal for Apache Gora is to become the standard data representation and persistence framework for big data by providing an easy-to-use Java API for accessing data agnostic of where the data is stored. It uses Apache Avro for data serialisation and depends on mapping files specific to each datastore.
In this project, we will develop a benchmark module that will help identify and understand the various performance characteristics of Apache Gora. It will also help quantify the overhead incurred by Gora compared to the use of native NoSQL systems, which will help in fixing bugs and aid performance improvement. The performance characteristics may range from execution time to resource utilisation. The proposed module could be used to benchmark and compare a native implementation against the Apache Gora implementation.
The project proposes bringing content inspired by the Forestry and ExtraBees mods for Minecraft to Terasology.
The content proposed includes an improved framework for merging items and blocks common to multiple modules (e.g. metals); a system providing tools for storing and simulating various genomes and their interactions; and a gameplay module, integrated with the currently existing JoshariasSurvival module, featuring bee breeding, genetic modification of bees, and renewable resource generation using bees.
The current logging implementation in CRIU uses standard-output printfs for all log-style messages. This brings unnecessary resource consumption for scanning the format string for control sequences, which can be avoided with binary logs (as is already done in many speed-sensitive projects, e.g. systemd, whose logs can be stored as binaries). This project proposes changing the printf-based test logging into a binary log implementation to save CPU cycles and thus provide faster checkpoint/restore for container-based virtualization systems.
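CRIU itself is written in C, but the idea can be illustrated with a toy Python sketch: the hot path packs fixed-size binary records instead of formatting strings, and formatting is deferred until the log is actually read.

```python
import struct
import time

# Record layout: timestamp (double), message id (uint32), one argument (int64).
RECORD = struct.Struct("<dIq")

# Format strings live in a table keyed by message id, not in the records.
MESSAGES = {1: "restored task %d", 2: "opened %d files"}

def log_binary(buf, msg_id, arg):
    # Hot path: no format-string scanning, just pack three fields.
    buf.append(RECORD.pack(time.time(), msg_id, arg))

def render(buf):
    # Cold path: decode and format only when a human reads the log.
    out = []
    for rec in buf:
        _ts, msg_id, arg = RECORD.unpack(rec)
        out.append(MESSAGES[msg_id] % arg)
    return out

buf = []
log_binary(buf, 1, 1234)
log_binary(buf, 2, 7)
print(render(buf))   # ['restored task 1234', 'opened 7 files']
```

Each record is a fixed `RECORD.size` bytes, so writing is a single memory copy; the real C implementation would follow the same split between a cheap writer and an offline decoder.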
Analysis of the architectural performance of WARP-V using FireSim and RocketChip Chisel code. The plan is to add WARP-V to RocketChip so as to utilize RocketChip's capability to generate a whole SoC; to integrate WARP-V with RocketChip components (L1 cache, TLB, and page table walker) so that the WARP-V version of RocketChip can run Linux; and then to run that version on FireSim to analyze the performance of WARP-V.
The purpose of this project is to add advanced label formatting features to the ccNetviz graph visualization library without affecting performance, while adding corresponding unit tests for the components added.
Understanding actions and interactions of humans from RGB-D sensor input can significantly improve the cognitive functions of robots and help safely and smoothly incorporate them into the world of humans. A human activity recognition component will thus be a significant addition to the functionality of RoboComp. Human activity recognition in video is an interesting and challenging task, and recent research shows that there are different ways to address the spatial properties of human actions and their temporal dynamics. In the course of this project I plan to start with simpler LSTM architectures and iteratively test and improve the model to achieve state-of-the-art results on the selected datasets, with the final goal of providing ready-to-use RoboComp components.
This project aims to validate the fields in data models and the relations between various models. The final aim is to ensure that all models follow the validation rules and are migrated to a valid form.
This project aims to optimize Apache Nemo I/O with two main approaches:
The Visual Scripting language is a part of the Godot Engine that is fairly underdeveloped. It is still a really powerful system; it just needs to be properly improved to reach maturity.
The idea is to make UI and UX changes to VisualScript that make it much less intimidating, give it a much friendlier interface, and provide a simplified API, rather than its current "GDScript with nodes" setup.
We can achieve this by abstracting the programming concepts away behind UI/UX features: renaming functions to Groups, unifying everything into a single graph, and keeping as much of the code as possible in front of the user without requiring them to jump from one place to another.
Groups should also be saveable as submodules, with a simplified creation process.
An interval system for the timeline. It introduces new features to the Pitivi video editor: setting up a range of time in the timeline editor, playing back specific parts of the timeline with or without looping, exporting selected parts of the timeline, cutting or copying clips inside the interval, and zooming the interval in and out. The timeline interval system makes Pitivi a more professional tool, enabling effective workflows.
This project is to implement work-stealing scheduling in the GCC implementation of the OpenMP standard. Task parallelism often yields highly imbalanced tasks, and work-stealing is a scheduling method widely used to tackle this issue. Implementing work-stealing is essential to staying competitive with other task-parallelism frameworks.
Metal Renegades will be a sandbox style game mode set in the wild west where humans and machines coexist. Players will play as robots themselves. It aims to give players an RPG style gameplay with different factions and quests in a sandbox environment. This project will focus on getting basic gameplay systems right and have a playable multiplayer. The result will be a module which allows players to fight, hunt and loot. This would provide the necessary framework to build upon and implement more complex systems.
The classical numerical linear algebra libraries BLAS and LAPACK play important roles in scientific computing. The various demands on these libraries pose non-trivial challenges for system management and Linux distribution development. Debian addresses these problems decently by leveraging its update-alternatives mechanism, which enables users to switch BLAS and LAPACK implementations smoothly and painlessly. This project aims to introduce an equivalent mechanism into Gentoo's eselect framework to manage BLAS and LAPACK, providing functionality equivalent to or better than Debian's update-alternatives.
Machine learning has the ability to take in information, process it, and give a well-defined output to the end user; machine learning algorithms can recognize patterns in behaviour and create their own logic. I will be applying machine learning algorithms and deep learning classification techniques to predict the onset of fever in patients. Over the summer I plan to: analyze the data and select the relevant physiological variables from the dataset; extract features from those variables; apply different machine learning algorithms to the processed data; and apply deep learning techniques to the processed data.
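A sketch of how those steps might be wired together with scikit-learn, using synthetic stand-ins for the physiological data (the variable choices and labels here are illustrative, not the actual dataset):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for physiological variables:
# heart rate, skin temperature, respiration rate.
X = rng.normal(size=(500, 3))
# Toy "fever onset" label correlated with temperature and heart rate.
y = (0.8 * X[:, 1] + 0.5 * X[:, 0]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

The same pipeline shape (scaling, then a classifier) carries over when the classifier is swapped for a deep learning model in the later stages of the plan.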
Proposal of methods to be implemented for Alpha matting. I have proposed to implement the following things:
Increasingly, distributed software development teams rely on online collaboration. The proposed project aims to implement the first skeleton of CoEditing in Che and Theia. In addition, the project would explore and address the synchronization challenges that arise in online collaboration. This could contribute to reducing code conflicts and to efficient management of a common buffer while working with multiple language servers and editors.
Btrfs is a next generation copy on write (CoW) filesystem aimed at implementing advanced features while also focusing on fault tolerance, repair and easy administration. Currently, Haiku’s btrfs implementation supports reading and writing for directories, but only reading for files. This project aims at adding support for file write operations.
This proposal details
The integration of the UPG routine will improve upon appleseed’s existing SPPM lighting engine by removing bias inherent in the local photon density estimation. The removal of said bias eliminates subtle rendering artifacts including, but not limited to, blurry shadows and caustics, light leaks, flattened geometric details, and ghost highlights (UPG section 6, figures 1 and 11). The in-progress BDPT lighting engine and future hybrid BD/PM lighting engines will then ultimately be able to take advantage of the UPG routine to evaluate eye-light sub-path connections in the context of photon mapping in an unbiased manner.
This project aims to improve the experience of using the application on both platforms by solving high-priority issues related to how new and old messages are displayed, how unreads and the application's different states are handled, and how notifications are displayed. The second part adds a slew of features previously available only in the web app, such as linkifiers and widgets.
With its innovative threading model and robust web app, Zulip has received a lot of praise from remote teams that use it. While the desktop app is certainly complete in terms of features, it needs some polish and certain standout features to make it an obvious choice for a Zulip user to install. In this proposal, I suggest the implementation of multiple features like enterprise deployment, replacing <webview> with BrowserView, database migration, automated testing, and a task manager to improve the overall performance and stability of the app.
The Chapel compiler can optionally produce LLVM IR and use LLVM optimizations with --llvm. Going forward, Chapel will use LLVM more and more, because the compiler can effectively communicate more important details to the optimization passes. This task is to improve the quality of the generated LLVM IR, the testing of Chapel with LLVM, and the performance of the Chapel compiler in --llvm mode.
Improving the p6doc command line tool.
CDLI has rich geographical and temporal data at its disposal. Currently, this information is not fully utilized. Although the data schema is being improved, there are significant challenges in exploiting the new relationships available.
The proposed idea is to develop beautiful interactive visualizations of search results using D3.js, to be displayed on the revamped website of the Cuneiform Digital Library Initiative (CDLI), https://cdli.ucla.edu/. Altogether, five different types of visualization will be developed; the one closest to the current search filters will be displayed inside the Visualize tab, along with a dropdown to toggle between visualizations. The visualizations will be built with accessibility for differently abled users and a range of device screen sizes in mind.
The current version of GNSS-SDR supports GPS, Galileo, GLONASS, and BeiDou Global Navigation Satellite System signals; among the BeiDou signals, GNSS-SDR fully supports BeiDou B1I and B3I. The primary goal of this project is to make the software receiver compatible with the BeiDou B1C signal. The project will enhance the software receiver with acquisition and tracking of BeiDou B1C signals, further expanding the receiver's capabilities and facilitating research on multi-constellation, multi-frequency receivers working with real signals. The demodulation of the B-CNAV1 navigation message of BeiDou B1C will open the door to innovation in multi-constellation receivers. Along with a fully functional implementation of a GNSS receiver working with BeiDou B1C signals, this project will help address topics such as integrity, reliability, robustness, enhanced coverage, and high-accuracy positioning. Additionally, the integration of BeiDou B1C observables into the position, velocity, and time (PVT) solution will enable a diverse range of applications.
The existing Poezio client lacks modern chat features such as infinite scrolling of messages and message search. This project intends to improve the overall functionality of Poezio by implementing infinite scrolling using Message Archive Management (MAM) and adding important features linked with it for general improvement.
The current validation process in tasking manager has some flaws due to which some projects are not validated completely or precisely. Project Managers should be able to choose validators and form teams for their project so that they will have control of who can validate their project data. This will give more precise validations and will also help in dividing large projects among different teams.
The project is divided into two parts.
Currently, the TUID service has stability and speed issues. The aim of this project is to investigate and understand the related problems and to find a permanent solution.
Panda3D is a mature 3D rendering engine both in age and functionality, addressing the need for the rapid prototyping of games while still providing stability for large projects. As a result of this maturity, however, Panda has been slow to take advantage of newer platforms and devices, with no support for iOS and only experimental support for Android. Adding support for iOS will make the engine much more appealing to new developers, while also creating the opportunity to update more antiquated areas of the engine. Many of these features will aid in future work on Android support as well.
Light path recording is a unique and potentially extremely valuable tool in appleseed; however, it is currently rather limited in both data capture and visualization. This project aims to remedy both by increasing the number of quantities that can be recorded, improving the visualization capabilities for those quantities, and providing a method for basic data analysis, filtering, and report generation.
In order to do this, a main piece of the project will be to provide a unified viewport in appleseed.studio which is capable of displaying and switching between several possible views of a scene and overlay data and widgets on top of it. This will greatly improve the usability of appleseed.studio as a scene mastering tool by allowing a technical artist to view exactly how different parts of a render are being affected by lights, objects, etc. and will pave the way for even more useful data integration to be added in the future.
ROS has changed the world of robotics: we no longer need to write from scratch every library that has already been implemented, and researchers and developers can build on top of existing tools and libraries. ROS 2 has brought good changes with it, including all the features of ROS along with new ones that address shortcomings of the earlier version. To make use of those characteristics, it is important to migrate our tools and drivers. This project aims to adapt drivers and some crucial JdeRobot tools to make them ROS 2 compatible. This proposal targets the visualization, calibration, and VisualStates tools.
Building hardware requires access to costly tools and devices, and the complexity of hardware design remains a deterrent. FPGAs in the cloud have solved the high upfront cost of the hardware and some aspects of tool availability, but not yet the complexity and scale. Moreover, reuse of already existing IPs is extremely difficult.
For this Summer of Code project, an extension to the FPGA-Webserver project (https://github.com/alessandrocomodi/fpga-webserver) is proposed to provide an end-to-end solution for accelerating web applications and developing hardware using cloud FPGAs.
A ready-to-use environment like this could accelerate compute-intensive tasks in web applications and could serve as a backend for online hardware design tools.
The requirement of the project is to create a workflow for entity linking between DBpedia and an external dataset. Entity linking is a process of ontology alignment, or ontology mapping, between a source and a target ontology; the goal of this workflow is to detect similar concepts/classes or instances/individuals between them. There are two levels of linking: schema-level linking, the mapping/alignment between concepts/classes of the source and target ontologies, and instance-level linking, the mapping/alignment between their instances/individuals.
This proposal presents an approach to ontology alignment through the use of an unsupervised mixed neural network. The workflow covers reading and parsing the ontologies, extracting all necessary information about concepts and instances, generating a semantic vector for each entity from meta-information such as entity hierarchy, object properties, data properties, and restrictions, and designing a user interface that shows all necessary information about the workflow.
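At its simplest, the matching step reduces to comparing the semantic vectors of entities; a hypothetical cosine-similarity sketch (the entity names and vectors here are illustrative, not the real workflow's embeddings):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two semantic vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_match(source_vec, target_entities, threshold=0.8):
    """Return the target entity most similar to the source, if any.

    target_entities: mapping of entity name -> semantic vector. In the real
    workflow these vectors would encode hierarchy, properties, restrictions.
    """
    scored = {name: cosine(source_vec, vec)
              for name, vec in target_entities.items()}
    name, score = max(scored.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

targets = {
    "dbo:City": np.array([0.9, 0.1, 0.2]),
    "dbo:Person": np.array([0.1, 0.9, 0.3]),
}
print(best_match(np.array([0.85, 0.15, 0.25]), targets))
```

The threshold keeps low-confidence alignments out of the result; a neural model would replace the hand-written vectors, not the matching logic.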
The 3D structures of proteins and organic molecules are tough to understand, since not everyone can visualize them in their mind. The project aims to build a real-time web application using socket programming (as in online multiplayer games) that facilitates an active learning environment for visualizing and learning the function of each component in a 3D structure. This one-of-a-kind software, bringing together the best of online multiplayer gaming and virtual labs, will be usable by teachers, students, and researchers all across the world.
An old-school survival game made using the Pocket Code IDE and tested on Android. The basic gameplay is made up of different levels linked together in a forest, through which the player has to survive all the obstacles and enemies present in order to make progress. Each level has a unique design with special terrain elements, plus added power-ups and weapons for the player to choose from. As the player progresses, the game morphs into higher difficulty levels housing extra traps and harder enemies to fight through. The goal is clear: survive whatever The Forest has to throw at you.
The game is intended as a showcase of the capabilities of the Pocket Code application, serving as a reference for all the wonderful features that can be implemented with ease in Pocket Code.
Open Event Organiser App is the app used by event organisers to create events and manage them. It uses the Android JetPack components and advanced libraries. During the GSoC period, this project aims to develop various new features for the Open Event Organiser App such as:
Fedora has an Android app that lets a user browse Fedora Magazine, Fedora Ask, Fedocal, etc. within it. The app is built using the Ionic Framework, Angular, and Cordova; essentially, it is a cross-platform hybrid app. In its current form, the application is deprecated and needs to be properly upgraded. This project aims to improve the existing Fedora App, port it to Ionic 4, and build an iOS version of it.
OSEM (Open Source Event Manager) has functionality to allow submissions for booths and tracks and to evaluate them, either accepting or rejecting each submission. This proposal aims to further improve the process by adding new features and implementing appropriate enhancements to existing ones, including customizing individual submissions to include additional details, the ability to comment on and rate submissions, and making it possible to link related submissions.
NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm that can evolve networks of unbounded complexity by starting from simple networks and "complexifying" them through different genetic operators. It has been used to train agents to play Super Mario World and to generate "genetic art".
Multi-Objective Optimization is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
I propose a project where I implement a NEAT framework and use it to optimize single-objective functions within ensmallen. Besides this, I will implement a framework for multi-objective optimization within ensmallen along with a multi-objective optimizer, Unified NSGA-III. If time allows, I will use these two frameworks to implement MultiModal NEAT (MM-NEAT) to train an agent in multi-objective reinforcement learning environments.
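ensmallen itself is C++; as a language-neutral sketch, the core primitive any multi-objective framework builds on is the Pareto dominance test, which NSGA-style optimizers use to sort populations into fronts:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# Five candidate solutions evaluated on two conflicting objectives.
pop = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(pop))   # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

Here (3.0, 4.0) is dominated by (2.0, 3.0) and (5.0, 5.0) by (1.0, 5.0), so the first three points form the front; NSGA-III then adds reference-direction niching on top of this sorting.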
The 2D Arrangement package can be used to construct, maintain, alter, and display arrangements in the plane. Once an arrangement is constructed, the package can be used to obtain results of various queries on the arrangement. The demo uses the 2D Arrangement package to show its capabilities. A potential user can quickly determine whether the package can be used to solve a problem they may have and how to implement it. The current Arrangement Demo has support for
The demo can be extended to demonstrate more of the curve types supported by the library, such as Bezier and algebraic curves. Certain aspects of the demo also need to be improved, such as the parsing of the polynomials for the algebraic curves and the input and output functionality.
The purpose of this project is to verify the convergence of the training algorithms provided in the 69 neural network R packages available on CRAN to date. Neural networks should be trained with second-order algorithms, not with the first-order algorithms that many packages seem to use.
Due to the large number of packages to validate, the work has been split between two students. As Student 1, I will validate 35 packages and prepare the communication for a new task view and for websites including, but not limited to, RPubs and R-bloggers.
At the end of the program, a package will be made available to neural network package authors and maintainers so they can verify and test new algorithms themselves.
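One simple, package-agnostic convergence check is to verify first-order optimality numerically at the weights a trainer returns (the verification package would be in R; this Python sketch with an illustrative quadratic loss only shows the idea):

```python
import numpy as np

def numerical_grad(f, w, eps=1e-6):
    """Central-difference gradient of a scalar loss f at weight vector w."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def converged(f, w, tol=1e-4):
    """A properly trained model should sit at a stationary point: ||grad|| ~ 0."""
    return np.linalg.norm(numerical_grad(f, w)) < tol

# Toy loss with a known minimum at (1, -2), standing in for a network's loss.
loss = lambda w: (w[0] - 1.0) ** 2 + (w[1] + 2.0) ** 2
print(converged(loss, np.array([1.0, -2.0])))   # at the minimum
print(converged(loss, np.array([0.5, 0.0])))    # training stopped early
```

A first-order trainer that stops too early leaves a large gradient norm at its answer, which is exactly the symptom this kind of test would flag across the CRAN packages.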
CGAL's components are used with other libraries to achieve complex tasks; some of these combinations suffer from a lack of interoperability, others from performance issues in the existing wrappers. In this project, two libraries that can be used with CGAL are considered: OpenGR and pointmatcher. Although OpenGR has a wrapper to work with CGAL types, it suffers from duplication of the point type due to the lack of a proper abstraction layer. For pointmatcher, no wrapper exists, i.e., there is no interoperability channel for using its ICP method on a CGAL point cloud to which global registration has been applied. The first goal of this project is to introduce an abstraction layer around the point and normal types for the OpenGR library so that CGAL types can be used directly. The second goal is a new wrapper for the pointmatcher library, providing an ICP method that can be applied to a CGAL point cloud after global registration.
The Athena framework is being upgraded to run in a multithreaded environment, and the aim of this project is to create a new ATLAS performance monitoring infrastructure that observes the performance of multithreaded processes. What performance measures to implement, how to visualize the results, and how to test the system are discussed throughout this paper. I also present my biographical information and related experience.
Once a patient is created, the system allows retrieving them through either their patient id or name. coreapps takes a client-side approach to searching: once a name or id is entered, results can be further filtered by gender and age. This project aims to implement server-side patient searching using one or more search criteria, such as name, id, gender, and date of birth/age.
WildFly Elytron is a new security framework developed by WildFly to provide a single unified security framework across the whole application server, replacing the pre-existing Java Authentication and Authorization Service (JAAS), which used to be the standard Pluggable Authentication Module (PAM) information security framework. To extend Elytron's functionality beyond the existing security realms, custom security realms can be implemented using Elytron's APIs and SPIs for use in the WildFly Elytron subsystem. To extend this custom development to scripting languages other than Java, the Java ScriptEngine is used. During the summer, as my GSoC project under the JBoss Community, I will work on extending WildFly Elytron's implementation of custom security realms to other scripting languages.
Symbolic execution is a powerful analysis for systematically checking assertions in programs. However, the already notorious scalability problem of symbolic execution is exacerbated by assertions. In our previous studies, we introduced parallelism to check assertions with Symbolic PathFinder (SPF), using either static or dynamic analysis. In this work, we propose to combine static and dynamic analyses for parallel analysis to achieve better scalability, and to further reduce the cost of symbolic execution through compositional and incremental assertion checking with SPF when the code or assertions change.
This project aims to add more options to the fabric8 maven plugin (fmp) for building images in OpenShift build mode. Right now fmp supports two kinds of binary builds in OpenShift build mode, namely s2i and docker. The project aims to extend support to other options such as Buildah and GoogleContainerTools/jib, solving certain aspects such as:
Unit tests will also be added for each new feature.
UI and UX design is a subject of great importance in app development. The three pillars of my contribution to the enviroCar project are as follows:
The scp-dbus-service of system-config-printer was written in Python. This makes it depend on Python and load the needed Python libraries into memory when started, which causes delay during boot. Here, the student's task is to port the scp-dbus-service of system-config-printer to C, either as a D-Bus service (which would work out of the box with many GUIs) or as a C library with an API (simpler).
Create a Kiwix Hotspot app as an extension to the Kiwix Android app. The Kiwix Hotspot app would allow users to share their ZIM files with others on the same network. This will be made possible by working with the Hotspot service and communicating with kiwix-serve via JNI.
This project aims to improve the openmrs-react-components repository.
The aim of the project is to develop a method, easy to use for both technical and non-technical users, to import, export, or move content securely between sites (either Plone or another CMS) using an interactive online UI. A fully polished release as an installable add-on, and possibly acceptance into Plone 6 as a core component, will mark the success of this project.
Creating a Joomla 4.0 frontend template will be much easier with an intuitive page builder. With it, users can define the position of elements on a grid using buttons and drag-and-drop, and select the information these elements should display. For faster development, the JavaScript framework Vue.js will be used, which provides a strong base for creating reactive components. The plan is to spend one month building the page builder and one on the integration into Joomla; final tests, integration, and documentation are part of the third month. Years of experience in component, backend, and frontend development help in understanding Joomla, and Git and JS were part of past projects too. So let's give Joomla a page builder out of the box and give every user the possibility to build customized templates.
The aim of this project is to design a GIN microservice that allows users to design efficient workflows for their work, probably by automating Snakemake, and to build the workflows with a Continuous Integration (CI) service. The GIN user base of neuroscientists and other professionals from related fields should not have to write thousands of repetitive workflows for their data and then test them manually. This tool will greatly increase their efficiency by eradicating that redundancy from their work.
Ping is one of the most basic and widely used utilities for network diagnostics, so a major concern is ensuring its long-term quality. Currently, there are two implementations of ping in FreeBSD: ping for IPv4 networks and ping6 for IPv6. They share a lot of duplicate code. To ease maintenance of these tools (adding new features, fixing bugs), the primary goal of my work is to create a single implementation supporting both IPv4 and IPv6 networks, in which the code duplication is eliminated and the functionality and output are equal to the old implementations. The secondary goal is to document my work so that it can serve as a guide for other programmers doing similar unification work.
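A unified ping needs one dispatch point that decides the address family and then shares the rest of the code path. The real work is in C inside FreeBSD; the sketch below (function name hypothetical) just illustrates the family-detection idea in Python:

```python
import socket

def detect_family(addr: str) -> int:
    """Return the address family of a literal IP address, so a single
    code path can serve both protocols (as in the unified ping)."""
    for family in (socket.AF_INET, socket.AF_INET6):
        try:
            socket.inet_pton(family, addr)
            return family
        except OSError:
            continue
    raise ValueError(f"not a literal IP address: {addr}")

print(detect_family("192.0.2.1") == socket.AF_INET)    # True
print(detect_family("2001:db8::1") == socket.AF_INET6)  # True
```

After this decision, socket creation, packet construction, and output formatting can branch only where the protocols genuinely differ.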
The main aim of my project is to implement Top 10 IoT vulnerabilities which were published in 2018 by OWASP and at the same time building a vulnerable Vmware image which can help security professionals to test their skills and tools in a legal environment, help IoT/hardware developers better understand the processes of securing IoT devices and aid teachers/students to teach/learn IoT application security in a classroom environment. I also intend to take up few ideas from Internet of Things (IoT) Top 10 2014 of OWASP and inculcate them in my project which can provide the best features for testers.
I'll mainly focus on the following documented vulnerabilities :
In this project we will work on the new CC systems (Beluga, Graham, Cedar, Niagara) and potentially mp2 trying to automate the setup of things like the account ID, meaningful default resources limits specific to each system, etc., in order to have zero configuration work done by users before being able to submit jobs. We will also work on creating some built-in safeguards against system abuse by novice users when they use this package. Documentation and tutorials will be easy to read and to understand and will be created at the same time the code is created in order to avoid delays in the delivery of the package. There will be use-case integration with PopSV, a package that could integrate this extension in order to demonstrate how easier this makes life for users of PopSV on CC systems. Time permits, we could potentially provide another software that could benefit from this e.g. SCones.
The project adds a number of safety features to WAL-G, including retention policy improvements, history consistency checks, and page checksum verification.
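Page checksum verification means walking a backup file page by page and comparing a recomputed checksum against the one stored with each page. As a toy illustration only (PostgreSQL's real pages store an FNV-based checksum in the page header, not a trailing CRC32), the loop structure looks like this:

```python
import zlib

PAGE_SIZE = 8192  # PostgreSQL default block size

def page_checksum(body: bytes) -> int:
    # Toy stand-in for the real FNV-based PostgreSQL page checksum.
    return zlib.crc32(body) & 0xFFFF

def verify_pages(data: bytes) -> list:
    """Return indices of pages whose trailing 2-byte checksum mismatches."""
    bad = []
    for i in range(0, len(data), PAGE_SIZE):
        page = data[i:i + PAGE_SIZE]
        body, stored = page[:-2], int.from_bytes(page[-2:], "big")
        if page_checksum(body) != stored:
            bad.append(i // PAGE_SIZE)
    return bad

body = bytes(PAGE_SIZE - 2)
good = body + page_checksum(body).to_bytes(2, "big")
bad = body + (page_checksum(body) ^ 1).to_bytes(2, "big")
print(verify_pages(good + bad))  # [1]
```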
This project aims to migrate the whole frontend codebase from the existing AngularJS to Angular. AngularJS has entered its LTS phase, and Angular is its successor. The migration should happen without disturbing the user flow or the developer experience.
The project involves analysing the implemented C functions and then writing the corresponding assembly functions using NEON registers and advanced SIMD (coprocessor 10/11) instructions, which will enhance efficiency and make dav1d faster. Performance testing with hardware performance counters follows each implemented assembly function, to improve the production quality of dav1d across ARMv8 and ARMv7 devices. Currently the project is being tested and analysed on a Raspberry Pi 3 Model B, which has a BCM2837 quad-core ARM Cortex-A53 (ARMv8) cluster.
Open Event is currently undergoing a revamp from v1 to v2, and I want to contribute to improving the UI of the project while focusing on the implementation of featured events, admin section enhancements, and revamping the server accordingly.
The ultimate goal of the proposal is to provide seamless integration between GNOME Music and MusicBrainz services, including retrieval of tags and cover art from MusicBrainz.
The main goal of this project is to reexamine the assumptions underlying the segregation of rhythm from pitch in these widgets, and to design and implement a more unified experience. This can be done by improving the widgets already present in Music Blocks, which will not change the usage of any other section of Music Blocks or confuse existing users. Currently, a Music Blocks user has to generate a rhythm with the rhythm maker widget and then import it into another widget such as the phrase maker. I would like to streamline this process by including rhythm-editing functionality inside the phrase maker. In the case of the musical keyboard, the user cannot even import rhythms, so I propose a function where the widget calculates the duration for which a key is pressed, along with the ability to play the on-screen keyboard using the computer keyboard.
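Deriving a note value from how long a key is held comes down to quantizing the measured duration to the nearest standard note length. A minimal sketch (the function name, candidate note values, and default tempo are assumptions, not Music Blocks APIs):

```python
def quantize_duration(held_seconds, quarter_seconds=0.5):
    """Map how long a key was held to the nearest standard note value,
    expressed in quarter-note units (0.25 = sixteenth, 0.5 = eighth,
    1 = quarter, 2 = half, 4 = whole)."""
    candidates = [0.25, 0.5, 1, 2, 4]
    beats = held_seconds / quarter_seconds
    return min(candidates, key=lambda c: abs(c - beats))

print(quantize_duration(0.52))  # 1  (close to one quarter note)
print(quantize_duration(1.10))  # 2  (close to a half note)
```

The widget would record a timestamp on key-down, subtract it on key-up, and feed the difference to a function like this.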
tools-golang is a set of Go packages intended to assist programmers in working with SPDX files.
Presently, tools-golang only handles SPDX files in tag-value format. It should also be capable of reading, writing, and modifying SPDX files in RDF format, as RDF is also officially defined by the SPDX spec. Thus, the primary objective of this project is to add support for the official RDF format. If time permits, support for other formats like XML, YAML, and JSON can be incorporated as well. Enabling compatibility for parsing and generating documents under pre-2.1 versions of the SPDX spec is another aim of this project.
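SPDX's RDF format is typically serialized as RDF/XML, so the parser's job is to walk namespaced XML elements into SPDX document structures. The actual work happens in Go; the following concept sketch in Python uses a tiny, hypothetical snippet and the standard XML library (a real implementation should use a proper RDF parser, since RDF/XML allows many serializations of the same graph):

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical RDF/XML snippet; real SPDX RDF documents
# are far richer.
RDF = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns:spdx="http://spdx.org/rdf/terms#">
  <spdx:SpdxDocument>
    <spdx:name>Example-SPDX-Doc</spdx:name>
  </spdx:SpdxDocument>
</rdf:RDF>"""

SPDX_NS = "{http://spdx.org/rdf/terms#}"  # Clark notation for the namespace

root = ET.fromstring(RDF)
doc = root.find(f"{SPDX_NS}SpdxDocument")
name = doc.find(f"{SPDX_NS}name").text
print(name)  # Example-SPDX-Doc
```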
Apache Gora™ provides an in-memory data model and persistence for big data. Gora provides a generic API to work with different datastores: storing, persisting, and querying data can all be done via Gora APIs on these data stores. Apart from data stores, Gora provides support for MapReduce, Apache Spark, Apache Pig, and Apache Flink. Hazelcast Jet, on the other hand, is an emerging distributed computing engine that competes shoulder to shoulder with Apache Spark and others. This project adds Hazelcast Jet execution engine support to Apache Gora.
antiSMASH runs multiple instances of the submission page, depending on whether people want to analyse bacterial or fungal sequences, and whether they run the production or development version. Those sites are very similar but differ slightly in the features offered, which currently causes a lot of code duplication.
This summer I will develop a library containing the components shared by the bacterial-ui and fungal-ui repositories. The library will act as a common repository not only for these two projects but also for future projects related to antiSMASH and beyond.
The goal of the Neurolab project is to create an easy-to-use open hardware measurement headset for brain waves that can be plugged into an Android smartphone. Our brains communicate through neurotransmitters, and their activity emits electricity. The neuroheadset measures that electricity on the skin of the forehead, and the software processes the signal and translates it into a visual or auditory representation. The collected data can be analyzed to identify mental health, stress, relaxation, and even diseases like Alzheimer's. A desktop version of the app already exists.
The Neurolab Android app, in its infant stage, is a reimplementation of the Neurolab desktop prototype. In my proposal, I target the systematic porting of the desktop application to Android using a topological sorting of its class dependency graph. My emphasis lies particularly in the implementation of the Neuro-Visual Feedback and Neuro-Audio Feedback modules, as they form the most concrete building-block features of the application. Apart from this, I also focus on polishing the application and enhancing the user experience through intuitive and engaging UIs.
GeoPandas is an open source project that makes working with geospatial data in Python easier by providing Pandas-style operations on geospatial data and a high-level interface to the geometries provided by Shapely. Leveraging Pandas and the core vector geospatial libraries, GeoPandas enormously simplifies the use of vector geospatial data. However, the visualization interface in GeoPandas is at present on the rough side: although it supports plotting and visualizing data on maps, it is inconsistent in its support for matplotlib features such as color mapping and color bars.
Hence, I would like to refine and extend the matplotlib interface (e.g. fixing support for color mapping, adding custom legend markers, supporting choropleths with missing values) so that GeoPandas provides solid exploratory visualization of geospatial data using matplotlib.
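The missing-values part of a choropleth comes down to giving NaN regions a dedicated color instead of letting them break the color scale. A library-free sketch of that logic (function name, palette handling, and the missing color are all assumptions; the real work hooks into GeoPandas' matplotlib plotting path):

```python
import math

def choropleth_colors(values, cmap, missing="lightgrey"):
    """Map values onto a discrete palette, giving NaN regions a
    dedicated 'missing' color."""
    present = [v for v in values if not math.isnan(v)]
    vmin, vmax = min(present), max(present)
    span = (vmax - vmin) or 1.0
    out = []
    for v in values:
        if math.isnan(v):
            out.append(missing)
        else:
            idx = int((v - vmin) / span * (len(cmap) - 1))
            out.append(cmap[idx])
    return out

colors = choropleth_colors([0.0, float("nan"), 10.0],
                           ["blue", "white", "red"])
print(colors)  # ['blue', 'lightgrey', 'red']
```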
A coala plugin for JetBrains IDEs that supports linting files with coala right within the IDE. The plugin will provide a unified experience for developers to keep their code files clean and readable without ever leaving the IDE or hindering the development workflow.
OWASP Seraphimdroid already employs a permissions-based system that is able to distinguish malicious apps from non-malicious ones, but it still produces some false positives, such as the foodpanda app. To improve OWASP Seraphimdroid's performance, we would like to learn from other signals that can be monitored about an application (such as network, CPU, battery and memory usage, and system call logs) whether it may be malicious.
I would like to create a reference library capable of bidirectional interaction with Retroshare and the outside world, engaging people to create apps, bots, and other tools on the Retroshare network, and making it easy to import/export data from the Retroshare network.
In a classic CUPS-based printing environment, the PPD files and print filters have to be put into standardized directories of the CUPS installation. This method works well with standard RPM or Debian packages, but if the CUPS environment is provided in a sandboxed package, adding files to the CUPS installation is not possible. The solution, suggested by Michael Sweet, is Printer Applications: simple daemons which emulate a driverless IPP network printer on localhost, convert print jobs into the printer's format, and send the jobs to the printer.
In this project, I aim to implement a universal printer application framework which can be packaged with print filters and PPDs to make up a Printer Application.
Scalable benchmark for the Swift Standard Library
Written in Swift using Xcode, this benchmark lets the standard library engineers monitor and track changes before the algorithms are deployed.
Dimensionality reduction techniques are useful methods that give crucial insights into a dataset. Unfortunately, such methods become computationally intensive on large-scale datasets. One approach to this complexity is to implement the algorithms in a distributed fashion. Ideally, users of LiberTEM should be able to run these algorithms through a simple pipeline called user-defined functions (UDFs), which lets users run functions with their desired functionality without having to worry about parallelization, which LiberTEM handles under the hood. My project therefore concerns both the distributed implementation of a dimensionality reduction method and improvements to the user-defined functions framework.
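The essence of the UDF contract is that the user supplies a per-partition computation plus a merge step, and the framework handles distribution. A toy sketch of that shape (method names here are assumptions for illustration, not LiberTEM's actual UDF API):

```python
class SumUDF:
    """Toy UDF: per-pixel sum over frames, computed per partition and
    then merged, so the framework can parallelize across workers."""
    def process_partition(self, frames):
        # Reduce one partition of frames to a per-pixel running sum.
        acc = [0.0] * len(frames[0])
        for frame in frames:
            for i, px in enumerate(frame):
                acc[i] += px
        return acc

    def merge(self, partials):
        # Combine per-partition results into the final answer.
        return [sum(col) for col in zip(*partials)]

udf = SumUDF()
partials = [udf.process_partition(p) for p in ([[1, 2], [3, 4]], [[5, 6]])]
total = udf.merge(partials)
print(total)  # [9.0, 12.0]
```

A distributed dimensionality reduction fits the same mold: each worker reduces its partition (e.g. partial covariance statistics), and the merge step combines them into the global result.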
Despite the ubiquity of text in computing applications, rendering high quality text using the GPU remains a challenging problem (see section titled Rendering here for a brief history). A promising solution was recently proposed by Patrick Walton in the form of a Rust library, Pathfinder.
Integrating Pathfinder with piet will enable druid to provide high performance, high quality graphics (using piet), and thus eventually help satisfy Xi’s goal of providing a beautiful yet performant interface, on every platform.
This proposal is to improve and resolve e2e testing issues and enable additional features for the Azure cloud provider.
Upgrade Rails to 6.0.0 and patch all 169 Debian packages that depend on Rails to be compatible with Rails 6.0.0.
Rails (Ruby on Rails) is one of the most popular web frameworks in the world. Rails is distributed in several distributions, and Debian is one of them.
Most web applications made with Rails are distributed using RubyGems. RubyGems is Ruby's package manager and is designed to distribute multiple versions of libraries at the same time, which lets Rails applications be maintained against multiple versions of the Rails framework. However, the package management system in Debian is designed to deliver one version at a time, which mangles these dependency relationships.
Currently, the Rails version in Debian is 5.2.2, and all of the Ruby applications and libraries in Debian depend on Rails 5.2.2. This proposal is to upgrade Rails to 6.0.0 and patch all 169 Debian packages that depend on Rails to be compatible with Rails 6.0.0.
Fact-Bounty is a platform to view news items from low-credibility sources and seek the community's opinion on the truthfulness of the claims in these items. Currently, only a simple interface to view and vote on news items is implemented. Below are the main features that will be developed and integrated into the front end.
LibreMesh has supported monitoring via the LibreMap agent for years. Nowadays NetJSON is emerging as a common format for exchanging networking information between community network actors. The LibreMesh community has been supportive of this effort since the inception of NetJSON, and now that NetJSON has gained momentum, we think it is the right moment to implement NetJSON-based monitoring as a module of the LibreMesh firmware.
The overall goal of this project is to implement a machine-readable graph legend interface and extend the API to add, modify, and remove legend data for a graph using the graphspace-python package. This will enable users to interact with the legend more easily through the GUI and the graphspace_python client library.
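A machine-readable legend is essentially structured data attached to the graph's JSON, with add/modify/remove operations on it. A minimal sketch of that interface (the key names and function signatures are assumptions for illustration, not the graphspace_python API):

```python
def add_legend_entry(graph, label, shape, color):
    """Add or modify one entry in a machine-readable legend stored
    inside the graph's JSON representation."""
    legend = graph.setdefault("legend", {})
    legend[label] = {"shape": shape, "color": color}
    return graph

def remove_legend_entry(graph, label):
    graph.get("legend", {}).pop(label, None)
    return graph

g = {"name": "example-graph"}
add_legend_entry(g, "kinase", shape="ellipse", color="#ff0000")
print(g["legend"]["kinase"])  # {'shape': 'ellipse', 'color': '#ff0000'}
```

Because the legend is plain structured data, both the GUI and the client library can read and edit it through the same API surface.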
Nonparametric Bayesian models are a way of getting very flexible models. They are most powerful when the prior adequately captures one's beliefs; inflexible models often yield unreasonable inferences. Nonparametric models can automatically infer an adequate model size/complexity from the data, without needing explicit Bayesian model comparison [Ghahramani, Z. (2009)]. Dirichlet Processes (DPs) are a class of Bayesian nonparametric models which are useful in domains ranging from topic modelling to brain image segmentation, and for numerous purposes including density estimation and semi-parametric modelling. A Dirichlet Process is an infinitely decimated Dirichlet distribution that can be used to set priors on unknown distributions: it is a distribution over distributions. A cornerstone of modern Bayesian nonparametrics, the DP module is an essential addition to any probabilistic machine learning library such as PyMC3. It is currently possible to implement (truncated) Dirichlet processes in PyMC3, but the process is quite manual and involved. This project aims at implementing different representations of DPs and inference algorithms specially tailored to them.
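One of the standard representations of a DP is the (truncated) stick-breaking construction: draw beta_i ~ Beta(1, alpha) and set the i-th mixture weight to the fraction beta_i of the stick that remains after the earlier breaks. A dependency-free sketch (PyMC3 would express the same construction with its own random variables):

```python
import random

def stick_breaking(alpha, k, rng):
    """Truncated stick-breaking weights for a Dirichlet process:
    beta_i ~ Beta(1, alpha), w_i = beta_i * prod_{j<i}(1 - beta_j)."""
    weights, remaining = [], 1.0
    for _ in range(k):
        beta = rng.betavariate(1, alpha)
        weights.append(beta * remaining)
        remaining *= 1 - beta
    return weights, remaining

w, leftover = stick_breaking(alpha=2.0, k=20, rng=random.Random(0))
# The weights plus the unbroken remainder account for the whole stick.
assert abs(sum(w) + leftover - 1.0) < 1e-9
```

Larger alpha spreads the mass over more sticks, which is how the DP infers model complexity from data.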
Single-cell RNA and DNA data generation using the emerging 10x Genomics technology is now becoming a necessary part of genomics approaches to decipher disease and biological understanding. With the massive increase in such data, we developed and tested a standard operational procedure to analyze these datasets.
The goal of the project is to implement a new pipeline object (two protocols: RNA and DNA) and a shared complementary library in the GenPipes analysis pipeline set, following the established GenPipes framework.
In this project we aim to incorporate Intel's cache allocation software package (intel-cmt-cat) into BenchExec, enabling per-core cache allocation for parallel benchmarking, and to add pqos CLI monitoring options along with OS and MSR interface support for Linux systems.
A new package for computing topological invariants on surfaces is under development in CGAL. The goal of this project is to add a method to this package that computes a shortest non-contractible cycle on a given surface, implementing the algorithm described in "Algorithms for the edge-width of an embedded graph" (Sergio Cabello, Éric Colin de Verdière, Francis Lazarus, Computational Geometry, Volume 45, Issues 5–6, 2012, Pages 215-224).
At the heart of OpenWISP is the ability to create templates, the means by which admins define the configurations that will be used on their devices. Most of the time, the templates that different admins create for a particular device are almost the same and may differ only in the variables needed. Yet at present, each admin still spends precious time recreating such templates on their own OpenWISP instance. Equally, newcomers unfamiliar with creating templates on an OpenWISP instance find it difficult to start using OpenWISP to get their desired, perhaps trivial, configurations onto their devices. This project therefore aims to make it possible for OpenWISP instances to share their templates both publicly and privately, and to allow templates to be collected in a template library.
Although Robotics Academy aims to be fully ROS-friendly, some exercises on mobile robots and autonomous vehicles are still based on the non-ROS infrastructure or are incomplete. These exercises have to be moved to ROS and brought into compliance with Robotics Academy standards, so that it becomes easier for learners to start working on exercises rather than tuning and setting up different environments. Currently, there is an unfinished exercise on Amazon warehouse robots that needs to be finalized: its infrastructure, robot controller, ROS messaging and communication, and several GUIs have to be implemented and updated. By doing this, I aim to finish the Amazon Warehouse exercises for single-robot and fleet operation and integrate them into Robotics Academy.
An application for users to exchange and visualize extracted biological data.
Project mentor: Augustin Luna
The project aims to develop the application with these main objectives:
Using modern technologies to provide a fast seamless experience.
Basic Project Phases
Documenting my work will proceed alongside the above phases so that it is easy for other contributors to follow.
Thank You :)
coala is a linting and code-fixing tool with support for many languages. A configuration file is essential to make full use of coala, and a standardised configuration format like TOML helps. coala currently has an INI-style configuration, which has certain limitations when it comes to custom sub-level parsing: the LineParser and ConfParser become complex and error-prone. This project will allow developers to quickly implement new features and will allow users to write configuration files in a format they are already familiar with.
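As a hypothetical sketch (the exact schema is part of the project's design work, and the setting names below simply mirror existing .coafile settings), an INI-style section such as:

```
[all.python]
bears = PEP8Bear
files = **/*.py
```

could become, in TOML:

```toml
[all.python]
bears = ["PEP8Bear"]
files = ["**/*.py"]
```

Lists become native TOML arrays instead of comma-separated strings, which is exactly the custom sub-level parsing that makes the current LineParser and ConfParser complex.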
This project will focus on:
On the RubyGems.org website, users can search for any kind of gem they want. However, most programmers use Google or other websites to look for the gems they need. So who is using RubyGems.org? Mostly Ruby beginners and intermediates who are more familiar with a GUI than a CUI. Therefore, I propose adding autocomplete and other functions that help GUI-oriented users.
git-bug is a distributed bug tracker embedded in git.
Currently the project provides a large set of tools and functionality (a command line, a web UI, and a terminal UI) for dealing with issues locally, and supports importing issues from GitHub and Launchpad.
During the GSoC program, this project mainly aims to:
Bassa is not completely containerized, and production deployments are not yet convenient. There are some scripts and Dockerfiles for some modules of the Bassa project, but a lot of work remains for a smooth onboarding of new developers and easy production-level Bassa deployment.
The aim of this project would be to create a GHC plugin that allows instructors to define requirements for the code within a project and automatically validate them. This plugin would have applications for users in a broad range of educational contexts, such as university professors teaching functional programming classes using Haskell, community members writing tutorials for their libraries, and authors who are writing textbooks with exercises.
It would offer validation of requirements ranging from basic static analysis (whitelisting/blacklisting of standard library symbols, checks for line length and warnings) to more advanced runtime testing that verifies functions in the given code pass specified unit tests.
The structure of the project would be to create a domain-specific language for expressing the requirements, and then to write the plugin that enforces them. The plugin would inject new code into the program to check for a specific "validation mode", which replaces the original behaviour of the program with any necessary runtime validation and replaces its output with a report on how well the code follows the requirements.
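The static-analysis half (symbol blacklisting) is language-agnostic in spirit: walk the syntax tree and flag banned identifiers. The real project does this in Haskell via a GHC plugin; the concept sketch below uses Python's own syntax tree purely for brevity, with hypothetical names:

```python
import ast

def check_blacklist(source, banned):
    """Concept sketch of symbol blacklisting: parse the source and
    report any use of an identifier from the banned set."""
    tree = ast.parse(source)
    return sorted({node.id for node in ast.walk(tree)
                   if isinstance(node, ast.Name) and node.id in banned})

violations = check_blacklist("result = eval(user_input)", {"eval", "exec"})
print(violations)  # ['eval']
```

In the GHC setting, the same walk happens over the compiler's intermediate representation, so the check sees the code exactly as it will be compiled.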
MusicBrainz for Android was first created in 2010-11 as part of GSoC, but no updates have been made since 2015. The app is currently broken, as it no longer adheres to the Android ecosystem. I feel a mobile app is a necessity for an organization like MusicBrainz: it will help increase user engagement and open up MusicBrainz to a plethora of new users.
This project concerns the development of an API for the qaul.net rewrite, along with a configuration and message management system.
Implementation of a JPEG 1992 lossless encoder core on an FPGA.
Wiki Edu Dashboard App is an Android client for the Wiki Education Dashboard API to support managing programs, including edit-a-thons, education programs, and other events. The app allows users to access Wiki Education Dashboard and Programs & Events Dashboard from their Android phone. The Android app would be handy, especially for use in bandwidth-constrained situations and during edit-a-thons.
The app will contain the following features:
Currently, LibreOffice does not support chart styles, which would let users quickly insert formatted charts following a consistent theme throughout a sheet. Competitors like MS Office (since the 2013 version) and statistical software like Stata support chart styles. Right now a lot of productivity is lost on manually adjusting the chart properties of each chart, or on the awkward copy-and-paste hack (changing the data linked to the new chart). With this feature implemented, the user can focus on the data rather than on how the data looks.
The current plan is to first implement styles at a logical level in the chart2 module, then allow import and export of chart styles to and from a locally saved file. A prototype chart style selection list will also be added to the Chart deck of the sidebar.
Interoperability Model: port the gen-api-models tool to OpenAPI v3
I spoke on Slack with the mentors (Leonardo Favario and Roberto Polli), whom I thank for answering all my questions about the project. They told me what must be maintained: I will update Kong (an open source API gateway) rate limiting and adapt it to the current system, add substantial test code (unit tests, etc.), and update the documentation, as well as maintain the REST APIs and adapt them to the latest versions of OpenAPI v3. I am in contact with the mentors and managers at least three or four times a week; every time they update me on something about the project, I report on it.
https://docs.google.com/document/d/1Navvdxo6QONWg6zz7LaXka4iDZt6ERTIaisFCgiQbQA/edit?usp=sharing
The idea is to create a small abstraction layer mainly for non-tech contributors while developing the Mə̀dʉ̂mbɑ̀ - Français language pair.
OpenRoberta is a learning platform. It uses a graphical programming language (based on Blockly) and has code generators/loaders for many robots and embedded systems used in education. Currently, OpenRoberta provides a few built-in pictures and melodies/sounds that can be used in programs, usually selected from a drop-down in the related block. More advanced users would like to upload their own media and use it in programs. This project idea is to design a way to store assets in the project, ensure uploaded data is converted into a format suitable for the robot, design a user interface form to upload media files, add "play sound" and "show custom picture" blocks, and provide a way to record sounds using the microphone.
This project aims to improve the ListView in App Inventor, one of the most demanded improvements from users. Idea: adding custom layouts to ListView. Expected outcome: 1) Users will be able to select one of the pre-defined ListView layouts and use it in their applications. 2) Users will be able to create any custom ListView row if it is not present in the pre-defined set of layouts.
Since it was created in 2005, the git rebase command has been implemented with shell scripts that call other git commands: git format-patch to create a patch series for some commits, then git am to apply the patch series on top of a different commit for a regular rebase, while an interactive rebase calls git cherry-pick repeatedly for the same purpose.
git-sequencer executes a sequence of git instructions against <HEAD> or <branch>, with the sequence given by a <file> or through stdin. git-sequencer aims to become the common backend for git-am, git-rebase, and other git commands, improving performance by eliminating the need to spawn new processes.
As of now, there are still some inconsistencies among these commands; e.g., there is no --skip flag in git-cherry-pick while one exists for git-rebase. This project aims to remove such inconsistencies in how the command-line options are handled.
We propose an improvement to the current Kubernetes topology manager to make it aware of generic hardware device topology at the node level, so that Deep Learning training can be improved significantly thanks to the data interconnection between NVIDIA GPU devices on a node.
Develop interfaces for a Credit Risk Assessment Scorecard: UI interfaces, an API layer, and a DB for setting up features, their relations, and criteria for the risk assessment of a potential or existing loan. The work includes developing the UI, CRUD APIs in line with MifosX practices, and the DB layer.
For tree ensembles and other ML models, SHAP feature importances have emerged as a popular explanation and debugging method. Implementing SHAP explanations in eli5 behind a unified interface, by wrapping a third-party library and adding the relevant plots, will help users gain insight into complex tree models and provide stronger reasoning about their structure and usage.
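SHAP values are Shapley values from game theory: each feature's attribution is its average marginal contribution to the prediction over all feature orderings. Real SHAP libraries use fast model-specific algorithms (e.g. TreeSHAP for tree ensembles); the brute-force definition can be sketched directly for a tiny model:

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    """Brute-force Shapley values: average each feature's marginal
    contribution over all orderings (exponential, so only for toys)."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        for i in order:
            before = model(current)
            current[i] = x[i]          # reveal feature i
            phi[i] += model(current) - before
    return [p / len(perms) for p in phi]

# For an additive model the attributions are recovered exactly.
model = lambda v: 2 * v[0] + 3 * v[1]
print(shapley_values(model, x=[1, 1], baseline=[0, 0]))  # [2.0, 3.0]
```

The eli5 wrapper's job is to expose attributions like these, computed efficiently by the third-party library, through eli5's existing explanation objects and plots.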
The goal of this project is to develop plugins for Ghidra to assist in firmware reverse engineering. My planned implementation of this project consists of three parts: a file system loader for firmware images (UEFI firmware volumes and coreboot CBFS), a loader for PCI option ROMs (parsing structures and resolving the entry point), and scripts to assist with UEFI binary reversing (importing common UEFI types/structures/GUIDs from EDK2 and other sources).
At CERN, the data from LHC collisions requires complex data types and functions to process. As a solution, the awkward-array library makes a Python implementation of these data types possible in a way that is portable to GPUs. Making this library work through C++ code would give it precompiled C++ routines for faster execution (past the initial load time) and, later, compatibility with vectorization primitives from C++ libraries.
As part of this project, I will expand the awkward-array Python library with C++-compatible functions using pybind11. This entails creating compiled C++ code in the form of Python extension modules that expand the already-existing package. The classes I will write will be C++/Python versions of the classes already written in the main directory of the awkward-array project.
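The data structure at the heart of awkward-array is the jagged (variable-length) array: flat contents plus an offsets index. Because both buffers are contiguous, the same layout maps naturally onto precompiled C++ routines. A minimal sketch of the idea (class and attribute names simplified for illustration):

```python
class JaggedArray:
    """Jagged array as flat content plus offsets: row i is
    content[offsets[i]:offsets[i+1]]."""
    def __init__(self, offsets, content):
        self.offsets, self.content = offsets, content

    def __getitem__(self, i):
        return self.content[self.offsets[i]:self.offsets[i + 1]]

    def __len__(self):
        return len(self.offsets) - 1

# [[1.1, 2.2], [], [3.3, 4.4, 5.5]] stored flat:
a = JaggedArray(offsets=[0, 2, 2, 5], content=[1.1, 2.2, 3.3, 4.4, 5.5])
print(a[2])  # [3.3, 4.4, 5.5]
```

The pybind11 versions would hold the same offsets/content buffers in C++, so operations like slicing and reduction run as compiled loops instead of Python-level indexing.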
This project will involve implementing nodes for the KNIME Analytics Platform that would aid evaluating clustering performance, and detecting outliers, in addition to other clustering algorithms.
This project will be split into several substages:
• Implementation of clustering analysis metrics: Silhouette Coefficient and Davies-Bouldin Index. These will be the first nodes to be implemented and I will need some time to get familiar with KNIME Node development, thus this will take a bit more time. Estimated time necessary: 2-3 weeks.
• Implementation of Fast-MCD for outlier detection. Estimated time necessary: 2 weeks.
• Implementation of an interactive interface for analyzing clustering performance. Estimated time necessary: 3 weeks.
• Implementation of at least one more clustering algorithm: K-means– and/or COD. More will be implemented if spare time is left. Estimated time necessary: 3 weeks.
FOSSology is an open source license compliance software system and toolkit. As a toolkit, it lets a user run license, copyright, and export control scans from the command line. The FOSSology system is a combination of agents that run in series to perform specific tasks. The proposed project is to create another agent, for Software Heritage, that searches the Software Heritage archive. A FOSSology user can then see whether the current file (or the files of an upload) is published in Software Heritage, and thus whether the file has been published in a different open source project before or is genuinely part of this distribution.
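The agent's core step is to hash a file and look that hash up in the Software Heritage archive. A sketch of building the lookup URL (the endpoint shape follows the public SWH API's content lookup; verify the supported hash types against the current API docs before relying on it):

```python
import hashlib

SWH_API = "https://archive.softwareheritage.org/api/1"

def swh_content_url(data: bytes) -> str:
    """Build the Software Heritage content-lookup URL for a file's
    bytes; a GET on it reports whether the content is archived."""
    sha1 = hashlib.sha1(data).hexdigest()
    return f"{SWH_API}/content/sha1:{sha1}/"

url = swh_content_url(b"hello\n")
print(url)
```

The agent would issue the request per file (batching and rate-limiting as the API requires) and record the archived/not-archived result alongside the other scan findings.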
DeltaCode is a tool to compare and report scan differences. It takes JSON files as input, which are themselves the output of ScanCode-toolkit. When comparing files, it currently uses only exact comparison, i.e. it compares the hash values of the files. The output of DeltaCode is a JSON/CSV file that includes details of the scan such as the delta score, delta count, etc. The goal of this project is to improve the usefulness of the delta by also finding files that are mostly the same (e.g. quasi or near duplicates) vs. files that are completely different. After this project, DeltaCode will be able to detect approximately similar files in a directory.
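The proposed idea can be sketched in a few lines. This is not DeltaCode's real API, just an illustration: fall back from exact hash comparison to a token-overlap similarity (plain Jaccard here; a production near-duplicate detector would more likely use fingerprints such as MinHash or simhash):

```python
import hashlib

def jaccard(a, b):
    """Token-set similarity in [0, 1]."""
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b) if a | b else 1.0

def delta_kind(old, new, threshold=0.8):
    """Classify a file pair: exact match by hash, then near-duplicate by overlap."""
    if hashlib.sha1(old.encode()).digest() == hashlib.sha1(new.encode()).digest():
        return "unmodified"
    return "near-duplicate" if jaccard(old, new) >= threshold else "modified"

print(delta_kind("a b c d e f g h i j", "a b c d e f g h i j"))  # unmodified
print(delta_kind("a b c d e f g h i j", "a b c d e f g h i k"))  # near-duplicate
print(delta_kind("a b c d e f g h i j", "x y z"))                # modified
```

The threshold value is an assumption; tuning it against real ScanCode output would be part of the project.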
The project is to develop a tool to track the contributions of developers across platforms such as Gerrit, Phabricator, and GitHub using the corresponding APIs. The tool will be hosted on Toolforge and used by event organizers to track developers' activity across the various platforms.
The syslog-ng-debun tool collects and saves information about a syslog-ng OSE installation, making troubleshooting easier, especially for anyone dealing with syslog-ng OSE related issues. The goal of this project is to improve the current debun tool by adding new features and developing tests.
Homology refers to the shared ancestry between a pair of structures, organisms or genes, in different taxa. Currently, homology types are decided on the basis of phylogenetic trees. They are later checked, for quality-control measures, by a whole genome-alignment score. The aim of this project is to use the methods of Deep Learning to predict the homology calls and check the homologies predicted by the above methods.
A web application for competing in online AI challenges. The front end has been improved for better user experience and performance.
Development of 16 modules for the DONUT platform for better UI/UX. To simplify configuration, these predefined modules will cover the most common use cases, with different options for modules and settings while setting up the environment. The modules will focus on providing users a smooth experience while using the DONUT platform.
RAPPS is a small GUI program that allows users to download multiple programs without hassle. It has its own application list that can be expanded by the user and the community.
Facial feature detection and tracking is a high-value area of computer vision, since humans are interested in what other humans are paying attention to and feeling, in enhancing face pictures in selfies, and so on. At this point, we feel this should become a standard "built-in" ability that computer vision users can simply call and rely on. OpenCV already has some code available for facial landmark detection (see the tutorial on the Facial Landmark Detector API and the tutorials for the face module), but much progress has been made since then that we want to make available.
Expected Outcomes:
Among the projects followed by the Developers Italia community, there are a few kits that help with the web development phase, based on a shared design system. Currently, the foundation for many kits has been laid; some are in early stages while others are stable. The designed kit needs to cater to a broad spectrum of users, from expert designers to first-time visitors. The goal of this project is to complete the UI Kit for React, which is based on Bootstrap Italia and React Storybook.
There are many internet pages providing data sets for educational and academic purposes in various fields of science and beyond (astrophysics, statistics, medicine, etc.). Some scientific tools provide "wrappers" for such online sources and allow the user to easily investigate these data sets and work with them in all kinds of applications, while the technical details, like fetching data from the server and parsing it, are handled completely transparently; the user does not even need to know what happens in the background. The goal of this project is to add similar functionality to LabPlot. This would make LabPlot more fit for educational purposes: students and teachers could use it to visualize and analyze data connected to the field they are currently studying, and it could bring LabPlot into the life of the average student.
Deforestation is a problem that affects many countries nowadays. This project therefore aims to build a system able to reforest automatically, using three popular technologies: Liquid Galaxy (as the interface the user interacts with), neural networks (to detect the burnt zones), and drones (to carry out the reforestation).
The project goal is to establish a test infrastructure supporting unit testing, UI testing, and integration testing, which will exercise new features and code changes before they are merged into the main repo. The goal is to ensure that no new code gets merged unless it passes all the tests run at build time, i.e. the new code works as designed and does not cause regressions by breaking other functionality or tests. The test infrastructure will be designed to tell the developer which method, function, or piece of code is causing an error or test failure, so it will be easy to locate the offending code and fix it. Proper test automation includes the generation of test coverage reports, as these help identify areas of missing coverage, so generating test reports using Codecov is also an important goal of this project.
I aim to improve the existing diff handling capability of coala so that it is possible for bears to offer multiple fixes for an issue. I also plan to implement interactive diff behavior, where user input is used to generate interactive diffs via appropriate placeholders in the code. To achieve this, I plan a generic approach for bears to pass their own applicable actions, which can then be used to write a generic bear that offers multiple patches. This will be followed by the implementation of templated patches and interactive diff behavior, along with tests supporting these new functionalities. Finally, thorough testing and exception handling should be done to fix bugs, designing alternate approaches if necessary (e.g. resolving conflicts between multiple bears on a single issue with git-like conflict markers through an IDE). I will update the documentation throughout the project. An optional task, if time permits, is implementing output formats for the enhanced features and their processing functions in the Linter class.
The rendering pipeline of Terasology is represented by a directed acyclic graph (DAG) consisting of a set of nodes and edges, where nodes are (sets of) features carrying framebuffers and shaders. The expected semantics behind the graph's edges is to communicate inter-node dependencies, which is currently not possible: the current state only defines the ordering of node calls, while relying on the dependencies being hardcoded right inside the nodes themselves.
One goal of this project is to provide a code base for expressing inter-node dependencies and to integrate it into the current pipeline, introducing modularity.
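Terasology is written in Java, but the core idea is language-neutral: once edges declare dependencies instead of a hardcoded call order, a valid ordering can be derived by topological sort. A small Python sketch with made-up node names:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical render nodes; each maps to the set of nodes it depends on.
deps = {
    "blurPass":     {"opaquePass"},
    "lightingPass": {"opaquePass"},
    "finalPass":    {"blurPass", "lightingPass"},
}

# A topological sort yields an execution order that respects every edge.
order = list(TopologicalSorter(deps).static_order())
print(order[0], order[-1])  # opaquePass first, finalPass last
```

With this representation, inserting a module's node only requires declaring its edges; the scheduler recomputes the order.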
We plan to create a package for parallel coordinate plots using ggplot2, based on the existing methods. We want it to make use of the greater flexibility of ggplot2 and to apply a modular approach. A detailed description can be found on the GSoC R 2019 project list page. In my proposal I explore some thoughts about how to achieve this: I want to enhance the geom_line function of ggplot2 to make building parallel coordinate plots more straightforward, avoiding many of the detours made by previous functions. My attempt at creating a parallel coordinate plot package is also provided, together with my information and a timeline for the project.
Brains on Board (http://brainsonboard.co.uk) is an inter-university project with the aim of developing an autonomous flying robot with the learning abilities of a honeybee, with all computation performed on board. Part of the project involves developing a lightweight C++ library for running neural simulations in real time on small robots (https://github.com/BrainsOnBoard/bob_robotics). The BoB robotics framework currently lacks the following:
This project aims to integrate BoB robotics with a more sophisticated 3D simulation package, namely Gazebo. The main contributions of this project shall be:
The project aims to create a platform for hosting and sharing Vega and Vega-Lite visualizations. It will allow a user to save, fork, and publish any visualization on the web. It is designed with user benefits in mind and covers everything from back end to front end, with a few new features. It will be integrated into the editor itself so that the user can conveniently make and share visualizations from the same place. This lowers the barrier to entry into the Vega ecosystem.
Deployment is the last stage in the application development process; before it, an application undergoes a comprehensive testing process to validate that the system meets its functional and non-functional requirements. An application comprises several functions, classes, and procedures, each of which is referred to as a unit. The objective of unit testing is to isolate each unit of the system and validate its correctness by identifying and fixing defects and bugs; the benefit is that issues can be identified at earlier stages. Chapel currently lacks unit testing, which harms the development of the platform and makes it easier to reintroduce past bugs. This project is a proposal to add a set of unit test classes to Chapel.
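The actual test classes would of course be written in Chapel; as a hedged illustration of what such a facility provides — isolating one unit and asserting on its behavior, including error paths — here is the same idea in Python's unittest:

```python
import unittest

def mean(xs):
    """The 'unit' under test."""
    return sum(xs) / len(xs)

class TestMean(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_empty_input_is_an_error(self):
        # Error paths are part of the unit's contract too.
        with self.assertRaises(ZeroDivisionError):
            mean([])

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMean)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The point of the proposal is to give Chapel an equivalent of this workflow: declare tests next to the unit, run them automatically, and catch regressions early.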
Ambiguous patterns are ones to which more than one transfer rule could be applied. Apertium resolves this ambiguity by applying the left-to-right longest match (LRLM) rule, which is not adequate for all the words that follow such patterns. To improve this resolution, a new module was introduced that assigns weights to these transfer rules for the words following the ambiguous pattern. This is done by training on a corpus to generate maximum entropy models (models with weighted rules); these models are then used to choose the best (highest-weighted) transfer rule to apply.
The weighted transfer rules module was built to apply only chunker transfer rules (patterns of words). This project will improve it by modifying some of the methods used, and then extend it to apply to interchunk and postchunk transfer rules (patterns of chunks) as well.
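The selection step at the heart of the module is simple to sketch. Given a trained model mapping an ambiguous pattern to weighted rules (the table and rule names below are made up, not Apertium's real data format), apply the highest-weighted one:

```python
# Hypothetical output of maximum-entropy training: pattern -> [(rule, weight)].
model = {
    ("det", "adj", "noun"): [("rule_reorder", 0.72), ("rule_literal", 0.28)],
}

def choose_rule(pattern, model):
    """Return the highest-weighted transfer rule for a pattern, if any."""
    candidates = model.get(tuple(pattern))
    if not candidates:
        return None  # unambiguous pattern: fall back to default LRLM behaviour
    rule, _weight = max(candidates, key=lambda rw: rw[1])
    return rule

print(choose_rule(["det", "adj", "noun"], model))  # rule_reorder
```

Extending the module to interchunk and postchunk stages means the patterns become sequences of chunks rather than words, but the selection logic stays the same.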
This project aims to improve the capabilities of the assumptions handling subsystem in SymPy. Currently, SymPy has two such systems (the old and the new assumptions). The old assumptions system is time-tested and fast, while the new assumptions system is slow but provides better logical inferences. This project targets improving the new system and making it faster.
This project would aim to provide the ability to de-TOAST a fully TOAST'd and compressed field using an iterator, and then update the appropriate parts of the code to use the iterator where possible instead of de-TOAST'ing and de-compressing the entire value. Examples where this can be helpful include using substring() from the beginning of the value, or doing a pattern or substring match.
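PostgreSQL internals are C, but the payoff of an iterator is easy to show in a toy sketch: when only a prefix of the value is needed, stop fetching and decompressing chunks early instead of materializing the whole value. The names below are illustrative only, not the PostgreSQL API:

```python
class ChunkStore:
    """Stand-in for a TOASTed value stored as out-of-line chunks."""
    def __init__(self, chunks):
        self.chunks = chunks
        self.fetched = 0   # how many chunks were actually read

    def __iter__(self):
        for chunk in self.chunks:
            self.fetched += 1
            yield chunk

def read_prefix(store, n):
    """Fetch only as many chunks as needed for the first n bytes."""
    out = bytearray()
    for chunk in store:
        out += chunk
        if len(out) >= n:
            break              # remaining chunks are never touched
    return bytes(out[:n])

store = ChunkStore([b"aaaa", b"bbbb", b"cccc"])
print(read_prefix(store, 6), store.fetched)  # b'aaaabb' 2 (third chunk skipped)
```

For a call like substring(col from 1 for 6), this saves fetching and decompressing everything past the sixth byte, which is exactly the class of win the project targets.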
Nowadays poliastro includes various interplanetary capabilities, such as retrieving for the user any body available in the JPL database, Cowell's propagation under different perturbations, or even plotting porkchops. However, more features could be developed for Earth, such as:
A new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a “surrogate” objective function using stochastic gradient ascent.
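For reference, the "surrogate" objective mentioned above is, in PPO's clipped form (with probability ratio and advantage estimate as defined in the paper):

```latex
L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\left[\min\left(r_t(\theta)\,\hat{A}_t,\;
\operatorname{clip}\!\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\text{old}}}(a_t \mid s_t)}
```

The clip keeps each policy update close to the data-collecting policy, which is what allows multiple epochs of minibatch updates on the same sampled data.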
P5 Math In Motion would be a library that renders interactive math notation inside p5.js projects with the help of KaTeX, an open-source library for rendering math notation on the web.
The current display system used at CDLI requires that a user reads a text to absorb visual and text information simultaneously, and to interpret the mapping between them, since image and transliteration are shown side by side (example: https://cdli.ucla.edu/P315663). Experts in cuneiform studies are usually able to discern this mapping only for their areas of expertise; non-experts and informal learners, on the other hand, have no direct means of affiliating image and annotation content. With the advent of machine learning techniques, this text-image hyperlink concern can now be addressed. This would involve building models using state of the art computer vision techniques specifically trained over a large dataset with annotated ground truth to understand the underlying structure in the tablet images so as to optimally perform image segmentation, character detection and recognition. The goal of the project involves developing machine learning models:
The goal of this project is to build a test environment so that we can measure the impact of any changes to the code base and tune the memory footprint and CPU consumption of Music Blocks in the web browser. Currently, Music Blocks uses Tone.js for audio and Easel.js for canvas rendering. On under-powered computers, when memory consumption increases beyond a certain limit, Music Blocks doesn't work optimally. Further, timing errors in audio are annoying, and the Tone.js synths get crackly when the CPU is maxed out.
Hence this project involves the following steps to tackle the above-mentioned problems: 1) building a test environment so that we can measure the impact of any changes to the code base; 2) characterizing the problem, e.g., what resources are currently being consumed; 3) identifying potential places for improvement (including finding memory leaks, etc.).
This GSoC project is about adding testing to VLC-Android: specifically, testing UI scenarios, adding tests for the medialibrary integration, and adding tests for the VLC-Android video player, which often sees regressions, and for player inputs.
Stated as directly as possible, the goal of this project is to make more of Git's codebase thread-safe, so that we can improve parallelism in various commands. The motivation behind this is complaints from developers experiencing slow Git commands when working with large repositories, such as chromium and Android. Since most personal computers nowadays have multi-core CPUs, it is a natural step to improve parallel support so that we can make better use of the available resources.
With this in mind, pack access code is a good target for improvement, since it’s used by many Git commands (e.g., checkout, grep, blame, diff, log, etc.). This section of the codebase is still sequential and has many global states, which should be made local, removed or protected before we can work to improve parallelism. That's what this project is targeting.
The main objective of the proposed project is to improve Zulip's Electron-based client application by adding a test suite, fixing bugs, and adding the new features listed below. I also intend to attend to new issues that may be added in the future.
This project will be about developing the sample code in two or more programming languages to demonstrate the use of MediaWiki Action API modules. In this project, I will design and embed a tabbed window on the API pages, write a code generator and document the sample code on API modules pages and also create a demo app.
Portable Operating System Interface (POSIX), is a set of IEEE standards that defines an interface between programs and operating systems for maintaining compatibility. By designing their programs to conform to POSIX, developers have some assurance that their software can be easily ported to POSIX-compliant operating systems. This includes most varieties of UNIX. POSIX also defines the command line shells and utility interfaces, for software compatibility with variants of Unix and other operating systems.
Ordinary differential equations are widely used for modeling biological networks and kinetic processes. The idea of the project is to create a web application that allows a user to submit an SBML model, select methods of analysis, and receive results. The analysis methods that will be available are: steady state analysis, parameter fitting, and simulation.
In this project, we will bring together MNE-Python’s standard neurophysiological data processing functions with the Brain Imaging Data Structure (BIDS) to create a fully automated processing pipeline. As an input, the pipeline will require only a minimal configuration file and a BIDS dataset. As an output, the pipeline will provide a path to saved derivative files as well as a detailed report on all processing steps.
Broadly I propose to implement, test and document the following features in Social Street Smart:
(1) Hate speech detection using deep learning and maintaining a dictionary of profanity words. (2) Click-bait detection using deep learning. (3) Origin detection of news/rumors using fact checking.
This project provides a framework for reproducible wireless resource allocation algorithm testing. This will ease the benchmarking and the development of new and well known algorithms.
This project aims to build an automated speech recognition engine for Indian English and Hindi using deep learning (RNN-CTC, TDNN, LDA-MLLT, CNN). The proposed deep neural networks will be trained on 5000+ hours of training data (audio plus manually aligned transcripts). The secondary goal of this project is to implement speaker diarization to generate speaker-separated transcripts. The end objective is to deploy all the modules within a single pipeline to help understand television news with more precision and accuracy.
I would be privileged to help develop the features missing in nftables which are present in iptables. Features such as "-m time" support, to allow data packets only within a certain time limit, a feature for matching the operating system version, and various tests for the existing features will be developed. I would very highly value any role that the netfilter team may give me.
An update to the public suffix list (PSL) is shipped bundled with Firefox releases. The current system creates the DAFSA binary file at build time; the list has to be manually updated with every new release and cannot be updated for people running old versions of Firefox. I will be working to create an update system using the remote-settings API in Firefox to deliver an updated list to clients periodically (the period is yet to be determined). The list will be published on the Mozilla remote-settings servers and shipped as a processed C++ hex array. This file format allows shipping the list quickly over the network, as it is orders of magnitude smaller than raw text and can be parsed much faster.
My primary targets for the summer are as follows:
I've covered the above four points in detail inside my proposal.
Bring Rocket.Chat to the world of 100 million+ Alexa-enabled devices. The project's aim is to bring innovative, high-value user experiences to the Alexa ecosystem, powered by open source Rocket.Chat. An Alexa skill will be developed for Rocket.Chat with various new features, support for upcoming features, and an improved voice user interface using the latest SDKs available.
In GSoC 2017, a shell autocompletion script was added to Clang for Bash users by modifying Clang internals. Now this script will be ported to the Zsh shell, and missing flags need to be added for better autocompletion. The LLVM optimizer tool [opt] is also going to get the autocompletion feature.
The objectives of this project are to optimize MOLTO-IT, trying to reach the best performance given the heavy numerical processing this project performs, and to create an API exposing this Matlab module as a service: users will send a POST request with the necessary inputs, and it will return the requested information, such as orbit parameters, time, fuel consumed, or even graphs. This will allow the team to create a better and more attractive graphical interface without losing Matlab's efficiency, creating the possibility of using this service wherever you want: in mobile, web, and desktop applications.
Imagine plugging a TV stream into an AI-powered system and, later in the evening, reading everything that happened during the day. Or think about people with vision problems, who could simply "hear" what is happening in front of them. All this "magic" is no longer magic, as AI technologies are getting better each day.
And this is only a subset of the possible applications of this project! Poor Man's Rekognition is a first step towards an open-source alternative to Amazon Rekognition and other similar proprietary services.
The goal of this project is to extend the existing Nominatim Wikipedia extraction scripts to take Wikidata into account. The Wikidata project contains structured data about items, and statements about the relationships between items. In recent years, OpenStreetMap has gained a large number of Wikidata tags, and the information from the Wikidata database should improve the importance rankings and search results from Nominatim. To accomplish this objective, it will be necessary to create a script that can process a Wikidata dump and extract the information that would be useful to Nominatim, then import that data into the Nominatim database. Nominatim already uses Wikipedia links to improve search results; where Wikidata is available, it can help determine the correct Wikipedia link to use whenever one does not already exist for an OSM object. This can be done both by directly matching objects to their Wikipedia links via Wikidata, and by ensuring that links are made to the correct type of object and not to vague pages, such as those describing brands or other information that is not relevant to search result ranking.
This project is basically about adding regression capabilities to both the PHP backend (based on php-ml) and the Python backend (based on TensorFlow) included in Moodle core, thereby expanding this API to support regression, so we can write models that estimate continuous values instead of classes.
This project aims to create a plugin for other platforms to help Creative Commons users. Since we are free to choose the platform and tools, I will create a browser extension to help users.
This, along with an intuitive and human-friendly UI, would result in more personal interaction of users around the world with Creative Commons and its mission of maximizing digital creativity, sharing and innovation.
Main goal: have the AD7292 driver in the Linux kernel tree.
Secondary goal: develop a series of tutorial posts for newcomers.
The following summarizes the deliverables I want to provide.
Develop the AD7292 driver incrementally, adding small pieces of code every week and gathering feedback from the community during the process. This deliverable will be the sum of several minor contributions over the entire GSoC period.
A first blog post describing the main blocks of code that should compose the AD7292 driver. This post will aim to elucidate the basic structure of an IIO driver comprehensively for newcomers.
Explain how an IIO driver communicates through SPI: what kind of API does IIO have to help establish communication with an SPI device?
Describe the driver’s probe, read_raw, and write_raw functions. I may also want to talk about the IIO channels, the ABI, devicetree support, and other topics, so this could grow into a tutorial composed of a series of posts.
Final post about the operation of the driver as a whole.
A feature for the lesson player that allows students to explain how they arrived at a (wrong) answer. The aim of this feature is to encourage reflection on the student's part, as well as to provide (anonymized) information to creators about student misconceptions, so that creators can improve Oppia’s feedback for future students.
When CoreDNS serves DNS queries publicly or inside Kubernetes clusters, the source IP of the incoming DNS query is an important identity. For security reasons, only certain queries (from a specific source IP or CIDR block) should be allowed, to prevent the server from being attacked. The goal of this project is to support a firewall-like, source-IP-based block/allow mechanism for CoreDNS. With our plugin (named firewall) enabled, users are able to define ACLs for any DNS queries, i.e. allowing authorized queries to recurse or blocking unauthorized queries towards protected DNS zones.
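CoreDNS plugins are written in Go; the matching logic itself is tiny, sketched here in Python with a made-up two-entry policy (rules evaluated top to bottom, first match wins, default deny):

```python
import ipaddress

# Illustrative ACL: (action, network) pairs, evaluated in order.
ACL = [
    ("allow", ipaddress.ip_network("192.168.0.0/16")),
    ("block", ipaddress.ip_network("0.0.0.0/0")),  # catch-all: default deny
]

def check_source(source_ip, acl=ACL):
    """Return the action of the first ACL entry matching the query's source IP."""
    addr = ipaddress.ip_address(source_ip)
    for action, network in acl:
        if addr in network:
            return action
    return "block"  # nothing matched: be conservative

print(check_source("192.168.1.10"))  # allow
print(check_source("8.8.8.8"))       # block
```

The real plugin would attach such a policy per zone in the Corefile and evaluate it before recursion; the ordering-and-first-match semantics shown here mirror typical firewall ACLs.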
The Virtual RobotX (VRX) competition is an international, university-level competition aimed at developing a vehicle in a Gazebo-based simulation environment. Tasks for this competition have been derived from RoboNation’s Maritime RobotX Challenge.
I will be building a GUI plugin for the Virtual RobotX simulator. It will overlay the Gazebo window to show mission statistics and progress.
Processing Language Server focuses on creating a Language Server Protocol (LSP) implementation for the Processing programming language. The PDE is currently built using Java and custom components of the Swing framework, which is effectively deprecated. The long-term goal of Processing is to replace this with a JS-based IDE, to bring in more contributors and to make building UIs simple. In planning such an IDE, an LSP implementation is of significant importance, since it underpins crucial IDE features such as auto-completion, go-to-definition, hover insights, and so on. An LSP will also allow easy and seamless integration of these functionalities into any editor, such as Atom, VS Code, etc.
The goal of this project is to take coala's JSON output format, convert it to various test result formats, and then integrate these test format reports with tools like CircleCI, AppVeyor, Jenkins, and Phabricator. Result format inconsistencies have been a problem for a long time, and converting static analysis results into a test result format can provide tight integration with various systems. This project also aims to provide our upstream linters with some of these test formats using our own Result Reporter Tool library, and to help developers extend this project to other formats and uses in the future.
To create a program which acts as a Wayland proxy (client + server pair), and which can forward Wayland protocol information and local shared-memory updates over a socket. The end result should enable a workflow similar to network transparency with X11 and ssh.
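A real implementation would be C against libwayland, but the heart of any such proxy is a copy loop between two sockets. A minimal Python sketch, using local socket pairs in place of the Wayland and SSH-forwarded sockets (the message bytes are made up):

```python
import socket

def forward(src, dst, bufsize=4096):
    """Copy bytes from src to dst until EOF -- the core of a stream proxy."""
    while True:
        data = src.recv(bufsize)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)  # propagate EOF downstream

# Local stand-ins: client -> proxy -> server.
client, proxy_in = socket.socketpair()
proxy_out, server = socket.socketpair()

client.sendall(b"wl_display@1.get_registry(2)")   # pretend Wayland request
client.shutdown(socket.SHUT_WR)
forward(proxy_in, proxy_out)

received = b""
while (chunk := server.recv(4096)):
    received += chunk
print(received)  # the request arrives unchanged at the far end
```

The actual project additionally needs to forward both directions concurrently, translate file descriptors in the Wayland protocol, and replicate shared-memory buffers over the socket, which is where the real work lies.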
Apache Airavata currently does not have a scheduler that intelligently throttles jobs before submitting them to compute resources (clusters); it instead relies directly on the queuing and scheduling policies of the clusters, which ensure fair use of the resources. In this context, much can be improved by implementing an internal scheduler for Airavata. This scheduler will help in (1) throttling jobs before submitting them to compute resources, (2) being aware of the load on the various clusters and intelligently dispatching series of user jobs to multiple clusters, thereby increasing throughput, and (3) making it easy and fair for multiple users to use a single community account.
Hello, my name is Christos Chronis, a postgraduate student at Harokopio University of Athens in the Informatics and Telematics department. I have extensive experience in programming, circuit design, 3D printing, and robotics, and I think this project is highly appropriate for my skill set and interests. The goal of a low-cost open-source robotic kit is something I had been thinking about as a development project for a long time. Furthermore, the idea of contributing to the educators' community and getting more people and students involved with robotics is extremely motivating for me. My proposal contains a cost analysis of the project, an indicative parts list, a comparison with the Lego robotics kit, a detailed timeline, my draft robot design showing how I imagine the robot kit, and finally some photos from an in-progress robot project for the robotics team of Harokopio University of Athens.
While Vert.x fully supports TypeScript, any library not written specifically for it provides no help with typings. This project will generate the required TypeScript definition files from the public APIs of jar/Java files.
(This is one of the ideas given)
An implementation of a sub-gridding framework is proposed. Sub-gridding reduces the computational resources and solve time required for many problems in Ground Penetrating Radar (GPR). This is achieved by reducing the global spatial and temporal resolution of the finite difference grid whilst maintaining the resolution in local regions containing high dielectric strength materials or fine geometric detail. The feature is highly beneficial to those in the GPR community working on optimisation and inversion problems, and the proposed framework will give gprMax users access to this advanced modelling capability.
The goal of this project is to design and develop an online course to teach deep learning to students in the humanities and social sciences. The course will contain labs and case studies from multimodal communication.
Web services continuously strive to improve performance standards in order to stay relevant in the market; performance plays an important role in customer loyalty, SEO ranking, and more. Among the various factors affecting performance, having a highly performant proxy in front of web servers is an important one, and this can be achieved through continuous performance measurement and improvement.
We add support for sampling arrows to aster models using the theory of curved exponential families.
The Wayback Machine archives billions of webpages, with vast numbers being added to the collection every day, including news sources. Though the crawling operations are quite successful, there is still work to be done to improve the quality of webpages archived by checking for “broken” or “bad” pages. I propose developing a methodology to assess the quality of news sources, both in terms of bias/factual reporting and in terms of technical viability such as geoblocking, paywall blocks, and CAPTCHAs. I will then use this methodology to create a tool that can automatically assess quality and take an appropriate action, perhaps preserving an error message and reason for failure in the Wayback Machine. This tool will ideally be integrated with the Wayback Machine’s web crawlers. I will also use my findings to develop recommendations for circumvention tactics, such as paying for subscriptions to important news sources or negotiating agreements with CAPTCHA services to allow the Internet Archive, as a public good, to bypass the CAPTCHA. The goal of the project is to develop a system to think about the quality of archived webpages and then create tools to automate quality assessment.
The DifferentialEquations.jl ecosystem has an extensive set of state-of-the-art methods for solving differential equations. By mixing native methods and wrapped methods under the same dispatch system, DifferentialEquations.jl serves both as a system to deploy and research the most modern efficient methodologies. While most of the basic methods have been developed and optimized, many newer methods need high performance implementations and real-world tests of their efficiency claims. In this project, students will be paired with current researchers in the discipline to get a handle on some of the latest techniques and build efficient implementations into the DiffEq ecosystem (OrdinaryDiffEq.jl, StochasticDiffEq.jl, DAEDiffEq.jl).
Nobody likes waiting for hours while Blender is busy rendering. Cycles, one of Blender's render engines, is a heavy user of ray tracing. Ray tracing requires building a BVH (Bounding Volume Hierarchy), and the construction of such a tree is complicated. Currently, Blender uses its own BVH builder.
Currently, Embree can be used only when rendering on the CPU, and it requires an optional flag to be set at compilation time (which is not enabled in pre-built binaries).
The goal of this proposal is to make Embree also usable on GPU.
Bhyve is a type 2 hypervisor which supports guest virtual machines by coordinating calls for CPU, memory, disk, network and other resources through the physical host's operating system. It runs guest operating systems inside virtual machines (parameters like the number of virtual CPUs, the amount of guest memory and input/output connectivity can be specified via command-line parameters). My project will attempt to create a test harness for the instruction emulation code, perhaps using the open-source Intel XED tool, which should give confidence that the existing code is correct and provide a mechanism for developing future instruction emulation. The test harness will automate and ensure the execution of tests by using a test library, and will generate a report. I will also design a test script to handle different test scenarios and test data. All the instructions emulated by bhyve during execution will be verified and checked for correctness with the Intel XED (x86 encoder-decoder) tool.
The Social API harmonizes authentication with external services in Drupal, providing an extensible module that allows integration of modules for user login, auto-posting, and any task that requires authentication with external providers. The main aim of this project is to integrate more implementers into Social Auth, Social Post and Social Widget and to implement more functionality in them.
The aim of this project is to create reference implementations, primarily for the metrics defined by the Growth Maturity and Decline Working Group, but also for the other working groups. This will be done by analyzing the data retrieved by Perceval from various sources using jupyter notebooks, pandas and matplotlib.
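As a rough illustration of the workflow described above, the sketch below computes one Growth-Maturity-Decline style metric (new contributors per month) from Perceval-like commit records. The record fields and values are made up for illustration, and the stdlib is used in place of pandas to keep the example self-contained.

```python
from collections import Counter

# Hypothetical Perceval-style commit records (field names are assumptions).
commits = [
    {"author": "alice", "date": "2019-01-10"},
    {"author": "bob",   "date": "2019-01-20"},
    {"author": "alice", "date": "2019-02-05"},
    {"author": "carol", "date": "2019-02-15"},
]

def new_contributors_per_month(commits):
    # A contributor counts as "new" in the month of their first commit.
    first_seen = {}
    for c in sorted(commits, key=lambda c: c["date"]):
        first_seen.setdefault(c["author"], c["date"][:7])  # YYYY-MM
    return Counter(first_seen.values())

print(new_contributors_per_month(commits))
# alice and bob first appear in 2019-01, carol in 2019-02
```

In the actual notebooks, the same grouping would naturally be a pandas `groupby` followed by a matplotlib bar chart.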
There is no larger compendium of shared human knowledge and creativity than the Commons, including over 1.4 billion digital works available under CC tools. Creative Commons has released the “CC Search” project. Being able to access visualizations of all the indexed content is a good way for the community (and CC) to see how much data has been indexed and to find and explore relationships between CC-licensed content on the web. The challenge, then, is to create visualizations of all the data stored in the Creative Commons catalog (over 250 million works and growing) and show how the works link to each other.
This project aims to integrate Kudu (a columnar storage manager developed for the Apache Hadoop platform) into the Apache Gora project as a DataStore Backend. This project will pursue one of the main objectives of Gora, which is to support as many NoSQL databases as possible in its environment through the Object-to-Datastore Mapping concept. Moreover, the inclusion of Apache Kudu will open new opportunities and flexibility for Gora's users who need more specialized solutions and variety in development alternatives.
This project is to complete the web components that are in the latest FHIR release. Web components will be created in Polymer 3.
KDE ISO Image Writer, which is part of the KDE neon project, is a tool to write ISO images to USB drives. It was forked from ROSA Image Writer and extended using the KDE Frameworks. Currently, the KDE neon website advises users to use ROSA Image Writer because KDE ISO Image Writer is still under development.
The aim of this project is to revamp the user interface of KDE ISO Image Writer following the designs made by the KDE community. I am also planning to package the application for various Linux distributions using their respective packaging systems, as well as Snap and Flatpak, for Windows and eventually for macOS. In addition, I would like to write documentation explaining how to use the application, with the aim of including a tutorial showing how to write KDE neon ISO images to a USB drive using KDE ISO Image Writer, which could then be added to the KDE neon download page.
GNU social is communication software used in federated/decentralized social networks. In order to achieve such decentralization, communication standards such as OStatus and ActivityPub were created.
Currently, the ActivityPub plugin has an unfinished implementation of HTTP Signatures, making GNU social unable to federate with other software using the same standard, and it doesn't use a queue system.
Furthermore, GNU social's client API, the Twitter-like API, needs some of its functionality reviewed, especially concerning third-party interfaces. The current implementation still requires tools to use OAuth 1 for authorization, and it doesn't handle bots properly.
In a first stage, this proposal aims to replace the existing network system with a more sophisticated one that will support HTTP Signatures, and to move the existing ActivityStreams implementations into a plugin to be used by both the OStatus and ActivityPub plugins. The second and final part of the proposal is reserved for the migration from OAuth 1 to OAuth 2 and adding support for the ActivityPub C2S API.
GCC alone is unable to compile a single large file in parallel, causing parallelization bottlenecks in some projects such as GCC itself. Here we propose a fix for this issue by parallelizing the GIMPLE part of the compilation using threads.
The JuliaAstro organization provides astronomers with tools to work with the Julia language. One of its latest packages is AstroImages.jl, which aims to allow researchers to visualize astronomical images coming from FITS files. It is meant as an interface to popular Julia packages like Images.jl and Plots.jl. This package is now in its infancy. The goal of this project will be to introduce new features and make it useful.
GOALS
The main goal of this project is to implement and evaluate an adaptive radix tree (ART) as an alternative to the current underlying data structure of PostgreSQL’s buffer manager. The authors of ART presented experiments indicating that it is a promising data structure for in-memory indexing of common data types. It is also space efficient, supports fast lookups, provides data locality and preserves key order. Such properties can be useful during the process of ensuring data integrity and during relation or index removal. Moreover, not only does it utilize the CPU cache more optimally and avoid cache misses, but it also opens promising new directions in the area of I/O optimizations, such as prefetching and write combining.
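A minimal sketch of the core idea behind a radix tree — byte-wise key decomposition — is shown below. Real ART nodes are adaptive (Node4/16/48/256), keep children sorted to preserve key order, and use path compression; none of that is modelled here.

```python
class Node:
    __slots__ = ("children", "value")
    def __init__(self):
        # byte -> Node; a real ART would pick Node4/16/48/256 adaptively
        self.children = {}
        self.value = None

class RadixTree:
    def __init__(self):
        self.root = Node()

    def insert(self, key: bytes, value):
        # Descend one byte of the key per tree level, creating nodes as needed.
        node = self.root
        for b in key:
            node = node.children.setdefault(b, Node())
        node.value = value

    def lookup(self, key: bytes):
        node = self.root
        for b in key:
            node = node.children.get(b)
            if node is None:
                return None
        return node.value

t = RadixTree()
t.insert(b"\x00\x01", 42)   # e.g. a buffer tag mapped to a buffer id
print(t.lookup(b"\x00\x01"))  # 42
```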
The aim of this project is to make P5.py ready for public use by completing the APIs to bring it on par with Processing and P5.js. Examples and tutorials for the modules will be added to make it more accessible to the Python community. Apart from adding new APIs, I will also focus on fixing the existing issues in P5.py and adding a test suite to the library, which will help keep the library stable as it grows in the future.
AirSim is an open-source, cross-platform simulator for drones and cars, built on the Unreal Engine. It provides physically and visually realistic simulations with popular flight controllers such as PX4 using either Software-In-The-Loop (SITL) or Hardware-In-The-Loop (HITL). It is generally used for testing software and for generating large amounts of visual data, which is essential for tasks such as Deep Learning and Reinforcement Learning for autonomous drones and vehicles.
Over the next few months, in Google Summer of Code, I will add support for the AirSim simulator to ArduPilot's SITL and increase the scope and applicability of ArduPilot in today's emerging fields of autonomous vehicles. This will involve creating the required backend for the communication between AirSim and ArduPilot, implementing lock-step scheduling for accurate simulation, and creating documentation, demo videos and sample programs for the same.
The aim of this project is to add support for HEIF/HEIC files in FFmpeg. High Efficiency Image File Format (HEIF) specifies the storage of individual images, image sequences and their metadata in a container file conforming to the ISO Base Media File Format (ISOBMFF). It can store twice as much information as a JPEG image of the same size. This format is increasingly used on mobile devices.
The goal of this project is to implement and validate an optimal detector based on Massey’s frame synchronization metric. At the end of the summer, the detector has to outdo the current implementation in GNSS-SDR, a hard-correlation scheme against a known sequence. The advantage of Massey’s frame synchronization metric over hard correlation is that it is the optimal way to detect the sequence, as opposed to hard or soft correlation, which offer no such guarantee.
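A hedged sketch of the two detectors being compared, using the common high-SNR approximation of Massey's metric (soft correlation minus a sum-of-magnitudes correction term). The sync word and sample values are invented for illustration; a real GNSS-SDR implementation operates on sampled signal streams, not Python lists.

```python
SYNC = [1, -1, 1, 1, -1]  # known antipodal sync sequence (made up)

def hard_correlation(r, mu):
    # Correlate the *sign* of the received samples with the sync word.
    return sum(s * (1 if r[mu + i] >= 0 else -1) for i, s in enumerate(SYNC))

def massey_metric(r, mu):
    # High-SNR approximation of Massey's optimal metric: soft correlation
    # minus sum(|r|), which penalizes offsets that only score well because
    # the samples there happen to be large.
    return sum(s * r[mu + i] for i, s in enumerate(SYNC)) \
           - sum(abs(r[mu + i]) for i in range(len(SYNC)))

def detect(r, metric):
    # Pick the offset that maximizes the chosen metric.
    offsets = range(len(r) - len(SYNC) + 1)
    return max(offsets, key=lambda mu: metric(r, mu))

# Noisy soft samples with the sync word embedded at offset 3.
r = [0.2, 0.4, -0.1, 0.9, -1.1, 1.0, 0.8, -0.9, 0.3, -0.2]
print(detect(r, massey_metric))  # -> 3
```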
This project aims to add review tests and improve the current question framework. After this project, review tests will be shown after a learner goes through a few lessons. The test results will be used to update skill mastery levels, so that learners can review and practice according to their mastery level and suggestions. Test results will also be sent to creators every few months to help improve the questions and lessons.
Rucio is a data management system that provides the functionality to organize, manage and access large amounts of scientific data (on the order of petabytes) using customizable policies. Rucio also provides monitoring and data analytics. As part of my GSoC program, I aim to create Single Sign-On (SSO) authentication for CERN. SSO is a user authentication service which permits a user to use one set of login credentials to access multiple domains. It will eliminate the need to log in to every CERN service that the user is registered to. The next task will be to develop a collection-following mechanism. The aim of this task is to develop a service which will inform end-users of any event affecting a dataset: for example, dataset deletion, file loss, changes in metadata, lifetime expiration, etc.
Currently, pdftoraster uses the Poppler libraries, which are unstable and change their function definitions between updates, causing building and installation errors. The most plausible way to overcome this problem is, instead of converting the output of pdftopdf directly to raster with pdftoraster, to use pdftoppm to first convert the PDF (the output of pdftopdf) to JPEG/PNG/TIFF page by page, and then convert each page to raster using imagetoraster.
The aim of this project is to update the current version of Mozilla’s FixMe, a tool for surfacing meaningful contribution opportunities to new contributors.
Forward-thinking open source projects are adopting SPDX IDs in source files (initially U-Boot, but now in much wider use, e.g. Zephyr, the Linux kernel, etc.). With these easy-to-find "SPDX-License-Identifier:" strings, generating an SPDX document for a project is a matter of iterating over the files in the project, extracting the information from these SPDX IDs and calculating checksums. Creating an open source tool to do this will aid these projects in generating accurate SBOM information at release time. The tool should be implemented as a command-line tool, so it can be incorporated into builds and options can be added. The goal is that projects using SPDX identifiers can automatically generate an SPDX document as a Software Bill of Materials (SBOM) on demand (build, release, etc.).
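The core of such a scanner can be sketched in a few lines: walk the tree, grep each file for the "SPDX-License-Identifier:" string, and compute a per-file checksum (SHA-1, which SPDX documents use for file checksums). The function names here are illustrative, not the tool's actual interface.

```python
import hashlib
import os
import re

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([^\s*/]+)")

def scan_file(path):
    """Return (license_id_or_None, sha1_hex) for one source file."""
    with open(path, "rb") as f:
        data = f.read()
    match = SPDX_RE.search(data.decode("utf-8", errors="replace"))
    license_id = match.group(1) if match else None
    return license_id, hashlib.sha1(data).hexdigest()

def scan_tree(root):
    # Walk the project and collect per-file license info -- the raw
    # material for an SPDX document / SBOM.
    results = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            results[os.path.relpath(path, root)] = scan_file(path)
    return results
```

A real tool would additionally validate the extracted identifiers against the SPDX license list and emit a complete SPDX document.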
Currently CritiqueBrainz’ users can review release groups, events, and places. This will be expanded on by implementing reviews for other MusicBrainz entities (works, artists, recordings, labels). This metadata will be retrieved from MusicBrainz through the mbdata module, and the CritiqueBrainz back-end and the front-end will be extended to support reviews for these entities.
Currently, writing wrappers to access code in the Android API is complicated, which makes writing add-ons or binding external modules to the platforms more difficult. For iOS it is a bit less of a pain, but it requires Objective-C knowledge.
Both Objective-C and Java provide introspection APIs, which would allow using the entire platform APIs via GDScript. It would make it much easier to directly support features of each platform that are not directly exposed in Godot, as it would allow calling functions and instantiating classes from the Java and Objective-C world without actually needing glue code.
My proposal is to allow access to entire platform APIs via GDScript and also make writing addons and external modules easier with GDScript.
Bundler includes some basic information in the User-Agent header during HTTP requests to gem servers, but this information is insufficient and requires heavy, costly parsing to make use of. I'll develop the functionality to have Bundler report different kinds of usage metrics over HTTP and write the server that will accept and store those metrics.
Many professional Windows desktop applications are created with .NET technologies, thanks to their close ties with the native home environment. Some software, like ArcGIS Pro, needs to consume different APIs; in this scenario, a WPS API written in C# will be needed to create an Add-In for the software, allowing its users to fetch complex geographical data from various web service providers.
Improve the efficiency of the Catroid tests.
Currently AIMA is transitioning from the 3rd to the 4th edition, so some algorithms are being removed, updated or added. To support the release of the newest edition of AIMA, some of the original Python implementations need to be altered to fit the changes in the book. Currently the 'aima-python' repo records all the algorithms in the 3rd edition but has not yet been updated for the 4th. What needs to be done is to check the differences between the third- and fourth-edition pseudocode and provide Python examples, test cases, demos and documentation for the altered parts. If possible, visualization methods such as animation will make the algorithms clearer.
Data visualization plays a crucial role in TVB's neuroinformatics platform, and the Structural Connectivity (connectome) is a core datatype, modelling full brain regions and their connections. Being able to manipulate the connectivity is central to understanding it in our research. Currently there is a 2D visualizer that presents these connectivities. In this project we are going to implement a 3D visualizer while refactoring the whole of TVB's front end from a UX-design standpoint. Apart from this, we will also focus on optimizing these visualizers for extremely large data structures.
The project revolves around improving the performance of the Role Strategy Plugin, the most downloaded authorization plugin for Jenkins (which is not included in Jenkins itself). The plugin allows Jenkins’ administrators to assign permissions (called a “role”) to projects (and Jenkins nodes) whose names match the regular expression for that role. On adding a large number of roles to the plugin, a significant slowdown is visible in the Jenkins Web UI. The project aims to improve the end user experience by improving the performance of the plugin and creating a reliable performance testing framework which can be used in other Jenkins plugins.
Build a library that reads a large number of HDF files and builds a database. Add support for searching and loading multiple datasets, intelligently identify dataset-dependent parameters, visualize and analyze images to meet the requirements of scientists at AWAKE, and port the existing analysis. Deploy the library for the open source community.
The GA4GH work streams aim to provide an end-to-end solution for clinical data use cases. The Cloud Work Stream provides solutions for paradigms that will be operated over the cloud. The sensitive nature of data in the personalized healthcare sector frequently requires their analysis on-site, i.e., analysis workflows need to be brought to the data, hence requiring remote access (of workflows) as well as computation on data. The project aims to provide a PoC for a microservice that redistributes TES instances generated by the Workflow Execution Schema engine based on either the time or cost of each task.
Rain and fog are very common weather conditions in real life. However, they reduce visibility. Especially in heavy rain, rain streaks from various directions accumulate and make the background scene misty, which seriously degrades the accuracy of many computer vision systems, including video surveillance, and object detection and tracking in autonomous driving. Therefore, removing rain and fog and recovering the background from rainy images is an important task. It can be used in image and video processing to make footage clearer, and it can serve as a preprocessing step for many computer vision systems. We propose to implement this technology in FFmpeg. For video, this proposal proposes to exploit the relationship between frames to remove rain and fog. For single images, we can use traditional methods such as discriminative sparse coding, low-rank representation and the Gaussian mixture model, as well as some deep learning methods.
At this time, there is only a unified diff to show changes in gitg. A split view (original and modified file side by side) is a handy representation of a change, and also allows showing merges as three-way diffs, modernizing the whole changes view in gitg.
Dart is a recent programming language. Originally proposed by a team at Google in 2010, its main purposes were to be a flexible, but structured language for the web. Its syntax was (and still is) very close to other languages (namely Java or C#) which makes it easy for newcomers to start using Dart.
Nowadays there is a new main purpose for the language: Flutter. A project that has been unveiled in 2015 and that utilizes Dart as its programming language. Currently developers use the project to deliver native cross platform mobile apps to both iOS and Android. In the future there will be other compile targets such as the web, or desktop applications.
The Dart toolkit still contains Dart2JS, a tool that can be used to transpile Dart code to JavaScript. The transpiled JavaScript will run in any browser, similar to TypeScript.
The Eclipse IDE was once heavily used for Android development. But with the arrival of a dedicated IDE for Android development (Android Studio), Eclipse is no longer the preferred choice, as its Android tooling no longer received updates.
The Greek Government Gazette text mining project, or 3gm, is an open-source automatic codification project using Natural Language Processing techniques to create an automated codex of Greek legislation. This proposal aims to enhance the current capabilities of 3gm by applying a series of Natural Language Processing enhancements and addressing a number of issues, such as improving the feature extraction mechanisms, improving the archive using web mining, and implementing better Named Entity Recognition. To address these issues, we will develop Python code and also train and deploy various machine learning algorithms.
This project aims to implement an iterator over the k shortest source-to-destination paths, using improved versions of Yen’s algorithm for simple paths and Eppstein’s algorithm for non-simple paths. Apart from working on these methods, the scope of this project includes improving and cleaning up the methods that are currently too slow in Python, as these methods will be a dependency for implementing the other algorithms.
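As a rough sketch of the simple-path side, below is a minimal generator version of Yen's algorithm over a dict-of-dicts weighted digraph. This is illustrative only — the function names are invented, and it makes no attempt at the performance work the project targets.

```python
import heapq

def dijkstra(graph, source, target, banned_edges=(), banned_nodes=()):
    """Shortest path in a dict-of-dicts graph; returns (cost, path) or None."""
    banned_edges, banned_nodes = set(banned_edges), set(banned_nodes)
    pq, seen = [(0, source, [source])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr in banned_nodes or (node, nbr) in banned_edges or nbr in seen:
                continue
            heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return None

def yen_k_shortest(graph, source, target, k):
    """Yen's algorithm: yield up to k loop-free shortest paths in order."""
    first = dijkstra(graph, source, target)
    if first is None:
        return
    paths, candidates = [first], []
    yield first
    while len(paths) < k:
        _, prev_path = paths[-1]
        for i in range(len(prev_path) - 1):
            spur_node, root = prev_path[i], prev_path[:i + 1]
            # Ban edges that would reproduce an already-found path sharing
            # this root, and ban the root's interior nodes (keeps paths simple).
            banned_edges = {(p[i], p[i + 1]) for _, p in paths
                            if len(p) > i + 1 and p[:i + 1] == root}
            spur = dijkstra(graph, spur_node, target,
                            banned_edges, set(root[:-1]))
            if spur is None:
                continue
            spur_cost, spur_path = spur
            root_cost = sum(graph[root[j]][root[j + 1]] for j in range(i))
            cand = (root_cost + spur_cost, root + spur_path[1:])
            if cand not in candidates and cand not in paths:
                heapq.heappush(candidates, cand)
        if not candidates:
            return
        best = heapq.heappop(candidates)
        paths.append(best)
        yield best
```

A practical implementation would avoid copying path lists inside Dijkstra and would use a proper priority queue of candidate paths keyed only by cost.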
The aim of this project is to add VoiceOver accessibility in the native Rocket.Chat iOS application. Minimum acceptable scope:
Primary purpose is to make Rocket.Chat for everyone, enhancing the user experience for users with low or no vision.
Secondary goal is to make the application one of the Popular Apps with VoiceOver on the App Store.
The main objective of McGill initiative in Computational Medicine (MiCM) is to deliver inter-disciplinary research programs and empower the use of Big Data in health research and health care delivery. One of the ways MiCM aims to achieve this objective is to strengthen collaborative research by sharing and transferring knowledge within the community.
In order to realize the potential of Computational Medicine at McGill University, there is a need to better connect researchers in life sciences and clinical domains with researchers and students in the data sciences (e.g., statistics, bioinformatics, medical informatics, computer science, epidemiology). The former has interesting datasets and questions, while the latter can apply or develop quantitative methods to look for solutions to these questions.
To facilitate this type of matchmaking, this project works on a database-driven, lightweight web application, with the purpose of matching McGill research data projects with masters and doctoral students looking for interesting projects to analyze.
The process of licensing a creative work can be very confusing. What license is right for me? How do I formally license my work? How do I help people attribute me when my work is used? The Creative Commons license chooser aims to answer these questions, but because of a lack of readability and informational clutter, the existing chooser tool falls short, and most likely creates more questions than it answers. My proposal aims to address these issues, and make the process of finding and using Creative Commons licenses easy.
Community-toolbox allows everybody to take a look at the activities going on in the PublicLab projects and helps in welcoming newcomers. It plays a major role in growing the community: newcomers can look for fto and help-wanted issues in order to get started, and other people are able to notice their contributions and help them. But as the community is growing so fast, we need to be more active in community involvement in order to keep welcoming newcomers with the least possible latency: quick reviews, noticing stale issues and motivating newcomers, along with ease of use, so that users can even take a look at the (cached) page when they are offline.
In short, it includes adding tests, documentation, and features while ensuring the maintainability of code backed with high reliability.
Adding a plotting engine to the PerformanceAnalytics package.
AsterixDB currently attempts to perform an efficient join using a hybrid-hash-join and uses a nested-loop-join when hybrid-hash-join is not appropriate. If the data is already sorted, there may be cases where a merge join would be more efficient. This project will migrate an existing merge join from an outdated AsterixDB repository and will build a new query plan off of that to implement a parallel sort merge join for data across many partitions.
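The core of a merge join on already-sorted inputs can be sketched in a few lines; this toy version joins plain Python lists and ignores the partitioned, frame-based execution a real AsterixDB operator must handle.

```python
def merge_join(left, right, key=lambda row: row[0]):
    """Join two key-sorted lists of rows on equal keys (sketch only)."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        kl, kr = key(left[i]), key(right[j])
        if kl < kr:
            i += 1
        elif kl > kr:
            j += 1
        else:
            # Gather the full run of equal keys on both sides so that
            # many-to-many matches are all produced.
            i2 = i
            while i2 < len(left) and key(left[i2]) == kl:
                i2 += 1
            j2 = j
            while j2 < len(right) and key(right[j2]) == kl:
                j2 += 1
            for l in left[i:i2]:
                for r in right[j:j2]:
                    out.append((l, r))
            i, j = i2, j2
    return out
```

The parallel version the project targets additionally has to repartition or range-align the sorted runs so matching keys land on the same partition.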
This project will analyze accountability and responsibility in an annotated dataset of newspaper articles on mass shooter events. The objective is to determine the best method of classification for detecting terms of accountability in these news texts. Experiments will be conducted to compare various classification models, from baseline linear classifiers to complex neural networks. AutoML approaches will be used to tune parameters and search for optimal neural architectures. Finally, time series analysis will be performed to analyze the changes in accountability over time.
OpenPub is a publication manager for individuals and research groups. OpenPub enables researchers to upload and share their publications with others and get valuable feedback on them. It also allows users to easily find related resources in different areas and categories.
I aim to develop additional IPP tool test scripts for IPP errata including IPP Document Object v1.1, IPP Job Extensions v1.1, and IPP 3D Printing Extensions v1.1.
Scrapy currently uses Python's inbuilt RobotFileParser which is not fully compliant, but the more compliant alternatives are difficult to package and use within Scrapy’s pure-python development tree. This project is about introducing a new interface for robots.txt parsers in scrapy, allowing users of scrapy to substitute a different robots.txt parser. The stretch goal of the project is to create a pure python parser for robots.txt files.
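One possible shape for such an interface — the names here are hypothetical, not Scrapy's actual API — is a small abstract class with a single allowed() query, with the stdlib parser wrapped as the default backend:

```python
from urllib import robotparser

class RobotParser:
    """Hypothetical pluggable interface: any backend implementing
    allowed() could be substituted via a project setting."""
    def allowed(self, url: str, user_agent: str) -> bool:
        raise NotImplementedError

class PythonRobotParser(RobotParser):
    """Adapter over the stdlib RobotFileParser backend."""
    def __init__(self, robotstxt_body: str):
        self._parser = robotparser.RobotFileParser()
        self._parser.parse(robotstxt_body.splitlines())

    def allowed(self, url, user_agent):
        return self._parser.can_fetch(user_agent, url)

rp = PythonRobotParser("User-agent: *\nDisallow: /private/\n")
print(rp.allowed("https://example.com/public/page", "mybot"))   # True
print(rp.allowed("https://example.com/private/page", "mybot"))  # False
```

With such a seam in place, a more compliant third-party parser (or the stretch-goal pure-Python parser) becomes a drop-in subclass.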
In Indonesia, elections are highly anticipated because they provide an opportunity for all people to influence the direction of their country. The results and voting patterns in an election are of great public interest, but often the data is not easy to access or work with. This project aims to increase the accessibility of Indonesian election data, which would be particularly beneficial to researchers, journalists and NGOs. The main goal of my project is to explore the elections and census data from the Indonesian Central Bureau of Statistics (BPS). I aim to produce an equivalent of the eechidna R package for Indonesia. This will include results from the Indonesian presidential and vice-presidential elections in 2004, 2009 and 2014, as well as demographic data. The data includes voting results for each polling booth and electoral division (electorate). I plan to demonstrate a few typical methods that can be used to explore this data.
The project is to provide a set of demo packages, including sample Python code and user-friendly web pages, to help clinical researchers reproduce their results. This project will make The Virtual Brain more easily adoptable by clinical researchers.
Adding a completely new Notification Panel for the Web app. Add some other features and more changes to the UI.
To make a convincing proposal in observational astronomy, you must demonstrate to a telescope’s proposal committee that your target can indeed be sufficiently observed with their instrument. How long could you observe a particular A-star with Hubble before it saturates the pixels? Will an M-dwarf at a distance of 327 parsecs have a high enough photon flux to be detectable with a certain filter and aperture? How many more photon counts can you expect when observing your object in the V band versus the U band? These questions are among those which telescopy will make easy to answer, thus aiding the thorough astronomer in their observational endeavors.
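A toy version of one such calculation: estimating photon counts from an apparent magnitude via Pogson's relation. The V-band zero point below is an approximate textbook value, and the function is an invention for illustration, not telescopy's API.

```python
# Approximate V-band zero point: photon flux from an m_V = 0 star,
# in photons / s / cm^2 / Angstrom (a rough textbook figure).
ZP_PHOTON_FLUX_V = 1.0e3

def photon_counts(mag, aperture_cm2, exposure_s, bandwidth_angstrom):
    # Pogson's relation: flux scales as 10^(-0.4 * magnitude).
    flux = ZP_PHOTON_FLUX_V * 10 ** (-0.4 * mag)
    return flux * aperture_cm2 * exposure_s * bandwidth_angstrom

# A 10th-magnitude star, ~1 m^2 collecting area, 60 s exposure,
# ~880 Angstrom effective V-band width:
counts = photon_counts(10.0, 1.0e4, 60.0, 880.0)
```

A 5-magnitude difference corresponds to exactly a factor of 100 in flux, which the function reproduces by construction.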
Tensorflow Datasets, or tfds, makes the user's work easier by transforming a raw dataset into a standard format so that it can be immediately fed into the machine learning pipeline. This library handles downloading the data, transforming it into a standard format, and preparing and constructing it as a tf.data.Dataset, so that building data pipelines and dividing records into training and testing splits is straightforward. No preprocessing is necessary on the user's side. Each dataset is implemented as a subclass of DatasetBuilder, which handles the dataset appropriately. There are a lot of good-quality research datasets for which DatasetBuilders still need to be implemented.
This proposal is based on implementing key research datasets in tensorflow/datasets.
A RecyclerView component will be created which can display media along with text.
Coccinelle is a tool used to match and transform code in large codebases. This allows for a variety of uses, from finding and fixing potential bugs to updating usages of an API when said API is modified or becomes deprecated. This is coined as “collateral evolutions.” Coccinelle allows a developer to define these transformations with SmPL, a language closely related to git hunk patches. Coccinelle applies these “semantic patches” by implementing its own C parser.
However, the SmPL parser and the C parser both parse types in separate ways. This results in an incompatible internal representation. As a result, Coccinelle is unable to operate generally on complex C types involving constant qualifiers, attributes, and typedefs of arrays. My goal for this project is to update the SmPL and C language parsers to represent types more uniformly, and to allow Coccinelle to operate on complex types.
Apache Fineract CN currently uses Hibernate as its object-relational mapping framework. However, Hibernate's license is not compliant with the Apache license. Therefore, this project seeks to remove all Hibernate-specific code and dependencies and replace them with core Java and Apache OpenJPA equivalents, to make Fineract fully Apache-compliant.
The project I will work on during GSoC consists of building many new lab challenges for web app pentesting, together with clear and easy-to-follow write-ups for learning from them. I will also create a guide for learners which summarizes all the labs and leads anyone through the process of learning and getting hands-on experience hacking web applications. This will help ease anyone into getting to know all the vulnerabilities. As for the lab challenges, I will work both on adding new challenges with new vulnerabilities and on extending the existing ones with new difficulty levels in order to dig deeper into each topic. Additionally, I will implement a hint-providing system for the lab challenges and improve all the current code.
VideoCutTool will help trim videos on-the-fly in Wikimedia Commons. Currently, a video in Wikimedia Commons cannot be edited online; it has to be downloaded, modified, and later re-uploaded. As this process tends to take a good amount of time, the VideoCutTool will hopefully make authorized users' work up to 10x faster. This tool will be deployed on Wikimedia Toolforge, a hosting environment that provides services for the Wikimedia movement.
PathwayMapper provides a pathway visualization environment that biologists are mostly familiar with. Currently, it works as a standalone application online, but integrating it into cBioPortal would be quite beneficial for cBioPortal users. In the first sub-project, the aim is to include a read-only version of PathwayMapper in cBioPortal.
NetworkView is a gene network visualization tool that is embedded into cBioPortal. However, it does not get along well with the cBioPortal codebase. Hence, in this second sub-project, the aim is to re-architect the NetworkView module to make it compatible with the cBioPortal codebase.
Creation of an application within XWiki that will allow users to generate interactive maps which support collaboration and are easy to create so that locations can be shared, and areas can be associated with structured data.
Finite state automata/transducers are currently used in many applications, including machine translation. One of the most challenging parts of developing transducer-based models is how to weight the edges so that, in the end, a certain path is favored over the others.
The obvious technique is to build a manually annotated corpus, use it to estimate probabilities, and then apply these estimates to the edges as weights. However, building a large annotated corpus is in most cases a tiresome job. Therefore, the project aims at generating these weights using only a set of raw corpora, based on unsupervised techniques.
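The simplest unsupervised instance of this idea can be sketched as follows: estimate token probabilities from a raw corpus and use negative log-probabilities as edge weights, so that more probable analyses correspond to cheaper paths. The add-one smoothing here is an arbitrary illustrative choice, not the project's actual method.

```python
import math
from collections import Counter

def edge_weights(corpus_tokens, smoothing=1.0):
    # Count token frequencies in the raw (unannotated) corpus.
    counts = Counter(corpus_tokens)
    total = sum(counts.values()) + smoothing * len(counts)
    # Weight = -log P(token); frequent tokens get cheaper edges.
    return {tok: -math.log((n + smoothing) / total)
            for tok, n in counts.items()}

weights = edge_weights("the cat sat on the mat".split())
# "the" occurs twice, so its edge is cheaper than "cat"'s.
```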
Support for Python 2 will be dropped in the coming years, so it is important to switch to Python 3. There is a need to make sugar-toolkit and activities compatible with Python 3 to ensure the smooth functioning of Sugar activities in the future. The aim of this project is to address this necessity.
The targets of this project are as follows:
Inevitably new and prospective Haskell users will attempt to obtain editor/IDE integration for Haskell but unfortunately they are much more likely to fail than to succeed due to the less than ideal state of the tools.
Haskell downstream tooling is simply in a bit of a bad place at the moment. Things have always moved fast in GHC land but recently things got a lot worse for tools when the release frequency increased from every-two-years to every-six-months.
As if that weren't enough, Cabal -- the main Haskell build tool -- is almost ready to switch over to the long awaited Nix-style new-build commands. This switch brings with it a major change to how tooling has to interact with the build system.
Bad tooling is a major source of frustration for new and experienced Haskell users alike. This proposal will substantially improve the reliability, performance and maintainability of tooling efforts.
This proposal consists of three main areas:
Alga is a library for algebraic construction and manipulation of graphs in Haskell. Currently there are two main goals, as proposed by Mr. Andrey Mokhov, which I would like to work on during the course of the internship. The first goal is to create a proper representation for acyclic graphs and the second goal is to add more graph algorithms to the library.
imjs and im-tables, written in CoffeeScript, are client-side libraries for querying mine instances and displaying data in tabular format, respectively; they reduce boilerplate code for developers looking to work with InterMine's services. The aim of this project is to overhaul these libraries by upgrading their dependencies (last updated in 2015) and adding easily extensible mock responses (in imjs). Moreover, the test suite will be broken up into unit and integration tests. Support for querying with the Registry class will be added. Docs for both libraries will be added (end-user docs for imjs and developer docs for im-tables). The build system of both libraries will be investigated, and the possibility of updating it will be discussed and implemented (possibly within the GSoC period itself, if the investigation shows this is feasible). High-priority issues, like adding support for setConstraintLogic() in imjs, will also be resolved.
A game to introduce female teenagers to Catrobat's programming learning app. The game's users will have the opportunity to learn how to code with the app, and how easy, fun and accessible that can be!
Many OSM contributors are guided by mapping tasks, such as HOTOSM tasks for disaster relief and humanitarian projects. Even though the iD editor is the default editor for these tools, loose integration means that users are often confused about the task they're working on, and cannot update task details directly within iD. This project aims to integrate HOT Task Manager task functionality directly into iD so that users can lock, update the status, and comment on tasks without leaving the editor. If successful, further functionality will be added such as the ability to search for tasks and work on suggested ones.
Kea is currently able to report dozens of statistics. However, for each statistic only one specific value is reported. For certain types of activities it is highly desirable to have multiple observations over time. Having many data points gives insight into processes that change over time, e.g. daily patterns in user activities, DoS detection and mitigation, etc. Therefore the primary task of this project is to expand the way Kea collects observations by adding the possibility of collecting statistics as time-series 'buckets'.
The aim of the project is to improve the Vega editor by adding support for a settings panel, and by improving features like uploading datasets and sharing visualizations.
The Discussion Forum of Submitty is an important and mature feature, allowing students to communicate about assignments and homework. The Forum currently allows discussion in categories and, for instructors, it offers the option of making announcements. It also offers the feature of sending emails about a thread to the class. It is, however, very basic in the features it offers, currently presenting only the core essentials of a forum.
As mentioned in the Project Ideas of Submitty, my proposal covers -
I would further like to add the following important features to the Discussion Forum -
The features mentioned by Submitty can be implemented hand in hand with the features I propose, which will lead to rapid growth of the application.
This project aims to add Kernel Address Sanitizer (KASAN) functionality to coreboot for the x86 architecture. An option will be added to Kconfig to compile coreboot with KASAN. KASAN will help to reduce memory-related errors in coreboot. Adding KASAN to coreboot will ensure code quality and make the code more robust.
Add support for reading DICOM image files. DICOM (Digital Imaging and Communications in Medicine) files are used to store medical imaging information and related data. The DICOM standard is copyrighted by NEMA and is known as NEMA standard PS3, and also as ISO standard 12052:2017.
The proposed tool aims to collect, process, and analyze data from various Bug Tracking Systems. The outcome of this processing is easily readable reports in various formats, like PDF, DOC, and CSV files, that contain release notes and issue-tracker summaries based on detected bugs. The platform can generate these reports automatically, on demand by the actor, or when an event is triggered and detected by the platform.
This project involves the creation and refinement of a tool for creating plots using the Data & Analytics Framework APIs, or GraphQL queries over DAF data. This will be accomplished with Python and its vast set of data science tools and libraries, such as matplotlib and mpld3, which have become an established standard in the data science field in recent years.
Poor Man’s Rekognition (PMR) aims to provide an open-source alternative to the paid Amazon Rekognition API. This summer, I aim to initiate the PMR project by providing a complete implementation of facial analytics and recognition as a Node.js library. A REST API will also be developed, as demonstrated in my proof of concept. The Node.js library will be a binding for C++-backed machine learning (ML) algorithms. The OpenFace 2.0 C++ library will be used for face detection and analytics. TensorFlow will be used to implement ArcFace for face recognition; the model will be converted to TensorFlow.js and included in the PMR Node.js library. Finally, support for videos will be added, exploiting a multi-object tracking algorithm for efficient face-attribute and person annotation.
Porting and Analysis of top solution algorithms from the TrackML challenge to ACTS framework. The algorithms include the combinatorial Mikado tracker, Cloudkitchen's Neural Network and DAG based tracker and top-quarks' logistic regression and outlier density estimation algorithm.
This project would add a newsfeed, which is similar to a customized read-only channel for every user. Every user (a follower) can follow other users (origin users). Whenever an origin user posts something in a public channel (the origin channel) or performs any public activity, the post appears in the newsfeed of their followers. A follower may or may not be a member of the origin channel, but the post still shows up in their newsfeed with a link to the original post in the origin channel.
This would greatly help in discovering new conversations and would increase the accountability of people in the organisation. This idea would be an asset to Rocket.Chat, as it would directly increase, many times over, the time a user spends on Rocket.Chat.
Key Features:
For Eclipse platform development it is recommended to use the latest nightly SDK build. Ideally, every morning each platform committer would have a new environment based on the previous day's build. Because of the overhead this is not practical, and usually one updates the SDK once a week or once a month. This is often too late, so automating the process is key here. The same automation could also be used by new platform contributors starting from scratch.
Currently Eclipse has the Oomph installer, which is capable of automating some of the required steps, but the general problems with Oomph are the complexity of the tool and the way the installation is executed and maintained.
The idea is to develop a way to save the current state of the development environment as a file (an “Eclipse snapshot”). The end user must be able to recreate the saved state of the development environment simply by specifying the location of the saved snapshot file at Eclipse start-up. The rest should be done (almost) automatically, reducing the effort needed to set up a new working environment to one or two clicks.
StatsD is a simple, text-based UDP protocol for receiving application monitoring data in a client-server architecture. As of right now, there is no StatsD implementation for PCP available, other than an existing one that is not suitable for a production environment.
The goal of this project is to write a PMDA agent for PCP in C that would receive StatsD UDP packets and then aggregate and transfer the handled data to PCP. There would be three basic types of metrics: counter, duration, and gauge. The agent is to be built with a modular architecture in mind, with an option to swap the implementation of both the aggregator and the parsers, which will make it possible to accurately describe the differences between approaches to aggregation and text-protocol parsing. Since the PMDA API is based around callbacks, the design has to be multithreaded.
The agent is to be fully configurable via PCP configuration options and/or a separate configuration file. Writing integration tests is also in the scope of the project.
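The three metric families behave differently under aggregation, which the StatsD wire format (`<name>:<value>|<type>`, with `c` for counters, `g` for gauges, and `ms` for timed durations) makes easy to illustrate. The sketch below is a toy Python model of that behavior; the real agent is written in C against the PMDA API.

```python
def parse_statsd(line):
    """Parse one StatsD datagram line: '<name>:<value>|<type>'."""
    name, rest = line.split(":", 1)
    value, mtype = rest.split("|", 1)
    return name, float(value), mtype

class Aggregator:
    """Toy aggregator for the three metric families the agent
    will support (illustrative sketch only)."""
    def __init__(self):
        self.counters = {}
        self.gauges = {}
        self.durations = {}

    def feed(self, line):
        name, value, mtype = parse_statsd(line)
        if mtype == "c":        # counters accumulate
            self.counters[name] = self.counters.get(name, 0) + value
        elif mtype == "g":      # gauges keep the latest value
            self.gauges[name] = value
        elif mtype == "ms":     # durations keep all samples
            self.durations.setdefault(name, []).append(value)

agg = Aggregator()
for pkt in ["hits:1|c", "hits:3|c", "temp:21.5|g", "req:12|ms", "req:30|ms"]:
    agg.feed(pkt)
```

Making the parser and aggregator pluggable, as the proposal describes, would amount to swapping out `parse_statsd` or the per-type branches above.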
The project proposes to implement peephole optimization in the NetLogo compiler codebase and to perform cross-platform compiler optimization on both the Desktop (NLD) and Web (NLW) platforms, thus benefiting both projects.
The objective of this project is to improve the behavior mechanism in Terasology in order to allow collective behaviors among multiple actors. Existing research on cooperative planning mechanisms for intelligent agents will be used in the implementation.
The Local Phone App, known as Meshenger, is an Android app that allows voice and video communication in a local network without any server or Internet access. Despite this, the app lacks some vital functionalities that hamper its usefulness. This project aims to take the Local Phone App to a whole new level. The first aim is to make it possible to call contacts over the Internet, so as to increase the versatility and user base of the app. The second aim is to implement secure authentication during the initial handshake. The third aim is to enable communication by chat and to show the user's own camera feed in a small window on the video-call screen. Lastly, the app will be polished, known issues will be fixed, and newly discovered bugs will be addressed.
This proposal’s main goal is to add a concrete and fully functional procedure for processing various messages and taking the required action. These messages are extracted from the bounce messages that are returned to the sender when an email cannot be delivered to the receiver.
The aim is to add a new feature for simulating GPR models based on OpenCL, which will run on heterogeneous computing units. PyOpenCL, a Python wrapper over OpenCL, is used to integrate gprMax with kernel functions that are meant to be executed on the computing units.
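gprMax simulates ground-penetrating radar by solving Maxwell's equations with the finite-difference time-domain (FDTD) method, and it is these field-update loops that the project offloads to OpenCL kernels. The sketch below shows a minimal 1-D FDTD update in NumPy purely for illustration; the function name and normalized units are assumptions, and the actual gprMax updates are 3-D and dispatched through PyOpenCL rather than NumPy.

```python
import numpy as np

def fdtd_step(ez, hy, c=0.5):
    """One time-step of a toy 1-D FDTD update (normalized units).
    In the project, updates like these become OpenCL kernel
    functions dispatched via PyOpenCL."""
    hy[:-1] += c * (ez[1:] - ez[:-1])   # update magnetic field
    ez[1:]  += c * (hy[1:] - hy[:-1])   # update electric field
    return ez, hy

ez = np.zeros(200)
hy = np.zeros(200)
ez[100] = 1.0                           # initial impulse source
for _ in range(50):
    ez, hy = fdtd_step(ez, hy)          # impulse propagates outward
```

Because each grid point is updated independently from its neighbors' previous values, such loops map naturally onto the data-parallel work-items of an OpenCL device.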
The Webpack Dev Server is an excellent tool for iterating and recompiling quickly while working on Webpack projects. It is designed to create a good development experience for users, but some issues with uniformity and functionality can make it difficult or confusing to use. On top of this, a few of the early design choices for the project need to be improved upon. This proposal aims to solve these issues by replacing SockJS on the Dev Server with the ws module and native WebSockets, and by building uniformity between the CLI and the API. The project will expand slightly beyond the Dev Server to tackle other issues and expand the Dev Server's usability and functionality by implementing the Dev Server CLI fully in webpack-cli and improving core features of Webpack that the Dev Server relies on. These changes will ultimately make the Dev Server easier to use, read the source code of, and contribute to.
Libguestfs provides a set of tools for communicating with virtual machine disk images. By using it, you can view or edit a file on almost any disk image. Anyone can use this functionality through a scriptable shell called 'guestfish' or an interactive rescue shell, 'virt-rescue'.
Libguestfs also provides a C library, which allows you to efficiently create applications that communicate with VM disks. Bindings for the library are available in many languages such as Python, Java, OCaml, Perl, and Ruby. However, an implementation in Rust does not exist yet. Rust is a rising programming language that is used more and more in every domain, including systems programming, thanks to its advantages: a strong type system, memory safety, safe concurrency, and suitability for low-level, high-performance programming.
This is a proposal to implement stable and comprehensive bindings for Rust so that Rust developers and projects can use libguestfs easily and safely.
This project will provide an easier way to set properties for various OGC services. It will do so through the development of a REST API and a UI to go along with it. The API will make it easy to generate a settings.json file for the corresponding OGC services by providing POST, GET, PUT, and DELETE endpoints for creating, fetching, updating, and deleting settings. The UI part will be written in ReactJS, as it allows creating a comprehensive SPA experience for the user across all platforms.
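The mapping from the four HTTP verbs to operations on the settings store can be sketched in a few lines. This is an in-memory illustration only: the route comments, service names, and the settings.json layout are assumptions for the example, not the actual API of the project.

```python
import json

settings = {}  # stands in for the persisted settings.json contents

def create(service, props):      # POST /settings/<service>
    settings[service] = props

def read(service):               # GET /settings/<service>
    return settings.get(service)

def update(service, props):      # PUT /settings/<service>
    settings.setdefault(service, {}).update(props)

def delete(service):             # DELETE /settings/<service>
    settings.pop(service, None)

create("wms", {"enabled": True, "max_layers": 10})
update("wms", {"max_layers": 25})
config = json.dumps(settings)    # what would be written to settings.json
```

The ReactJS UI would then be a thin client over these endpoints, presenting the same create/read/update/delete operations as forms.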
A project on creating a language tool (machine translator) for a new language pair
Developing a new data format to unify the needs of the different experiments that will make use of it, and improving the data visualisation of the application by adding new functionality to its GUI, as well as adapting the existing application to the newly developed file format.
My proposal regarding the API Design Tool project
Spidermon is our spider monitoring tool. Currently, users can choose between two libraries for item validation rules: jsonschema and schematics. We want to provide a third option: Cerberus.
The goal of this project is to make Cerberus available to users as a new option for item validation.
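Cerberus describes validation rules as plain dictionaries (keys like `type`, `required`, and `minlength`) and returns per-field errors instead of raising. The toy validator below mimics that style to show what scraped-item validation looks like; the real integration will use the `cerberus` package's `Validator` class, and the error messages here are illustrative.

```python
def validate(schema, item):
    """Toy dict-schema validator in the spirit of Cerberus rules."""
    errors = {}
    for field, rules in schema.items():
        if field not in item:
            if rules.get("required"):
                errors[field] = "required field"
            continue
        value = item[field]
        if "type" in rules and not isinstance(value, rules["type"]):
            errors[field] = "wrong type"
        elif "minlength" in rules and len(value) < rules["minlength"]:
            errors[field] = "too short"
    return errors

# Schema for a scraped item: a required url and a title of 3+ chars.
schema = {
    "url":   {"type": str, "required": True},
    "title": {"type": str, "minlength": 3},
}
errors = validate(schema, {"title": "ok"})
```

In Spidermon, such per-field error dictionaries would feed into monitor results, flagging spiders that yield malformed items.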
Implementing the Reinforcement Learning Variant ‘NeuroEvolution of Augmented Topologies’ (NEAT) and follow-up research such as ‘HyperNEAT’ in Tensorflow and integrating it into the Tensorflow ecosystem. This project can build upon the published efforts of the projects ‘Tensorflow-NEAT’ and NEAT-Python.
The project I prefer to do is “Implement EXCEPT ALL and INTERSECT ALL operations” from the idea list. I chose it because, from my perspective, this project suits my abilities and gives me a chance both to take part in developing an open-source project and to get to know database systems more deeply.
This project involves automatically building docs for pull requests in GitHub repositories that Read the Docs users have. This will include modeling pull requests as versions, building an integration with GitHub, storing the generated docs in long-term storage, deleting pull request docs when they are merged or after a certain period of time, and many other possibilities that might arise while working on this.
For this project, I would like to create a new CGAL package that combines the global regularization algorithm from Section 3 of the KIPPI paper, “Kinetic Polygonal Partitioning of Images” by Jean-Phillipe Bauchet and Florent Lafarge and all other regularization techniques that are already available in CGAL. The goal of this project is to create a generic API for the global regularization algorithm, given a set of arbitrary input items, a connectivity among them, and user-defined regularization conditions. The 2D regularization technique described in KIPPI becomes a particular instance of this generic API with the items being 2D lines/segments, the connectivity being a Delaunay triangulation of these items, and the regularization conditions described in the paper. As required, I will write and include all necessary documentation, examples, and tests. This proposal describes the project requirements, anticipated constraints, and the development schedule. I have also included personal information in regards to my academic background and my commitment to CGAL.
Natural Language Generation (NLG) is the process of generating coherent natural language text from non-linguistic data. Though the community has generally converged on speech and text as the outputs of these models, there has been far less certainty about the inputs. A large number of inputs have been used for NLG systems, including images, numeric data, semantic representations, and Semantic Web (SW) data. Lately, the generation of natural language from SW data, more precisely RDF data, has gained substantial attention and has also been shown to support the creation of NLG benchmarks. However, most models aim at generating coherent sentences in English, whilst other languages have enjoyed comparatively less attention from researchers. RDF data usually comes in the form of triples, <subject, predicate, object>: the subject denotes a resource, and the predicate denotes traits or aspects of the resource and expresses the relationship between the subject and the object.
In this project we aim to create a multilingual neural verbalizer, i.e., to generate high-quality natural-language text from sets of RDF triples in multiple languages using one stand-alone, end-to-end trainable model.
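To make the task concrete, a rule-based baseline can verbalize triples with per-predicate, per-language templates, as sketched below. The predicate name and example templates are illustrative assumptions; the project replaces exactly this kind of hand-written mapping with a single trainable multilingual model.

```python
# Per-predicate templates in two languages (illustrative only).
templates = {
    "birthPlace": {
        "en": "{s} was born in {o}.",
        "de": "{s} wurde in {o} geboren.",
    },
}

def verbalize(triples, lang="en"):
    """Turn <subject, predicate, object> triples into sentences."""
    sentences = []
    for s, p, o in triples:
        sentences.append(templates[p][lang].format(s=s, o=o))
    return " ".join(sentences)

text = verbalize([("Ada Lovelace", "birthPlace", "London")], lang="de")
# -> "Ada Lovelace wurde in London geboren."
```

Templates scale poorly: every new predicate and language requires hand-written rules, which is precisely the gap a single end-to-end model is meant to close.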
codeceptjs-resemblehelper is a CodeceptJS helper that integrates the image-comparison functionality of resemble.js into tests; it is used to compare images/screenshots and to pass or fail tests based on the provided tolerance level.
Use EchartsJS to rewrite NetjsongraphJS
Meaningful Adversarial Examples for Natural Language Models
A project to create adversarial examples and resulting counterfactuals for text classifiers using the relations in the word embedding vector space. This allows for meaningful alterations to be made in the input documents to test a model for biases. What would happen if the subject of this document was female instead of male? If it was a person of color instead of being white? How would a state of the art model change its results when such changes are made?
This project aims to address these questions by creating a framework that allows for testing against such biases, as well as the creation of augmented datasets to dissuade their development.
The Android Graphics Tools Team's GraphicsFuzz is a metamorphic testing framework for OpenGL and Vulkan graphics drivers. Among its tools are a shader generator, which mutates a shader program into variants that differ in machine code, but render a similar image to the original - and a shader reducer, which reduces shader code to focus on specific logic that a user is interested in. Used in tandem, these tools allow one to fuzz test shader compilers for flaws and easily reproduce them.
The generator mutates a shader by performing "transformations" on the original code which complicate control/data flow. These transformations often make use of OpenGL's built-in functions and features to mutate expressions into equivalent forms.
However, the generator and reducer are only aware of a subset of the OpenGL shader language, and do not support all of OpenGL's built-in math and transformation functions. This reduces the number of potential mutations and ultimately hampers GraphicsFuzz's capability of finding edge cases.
This project aims to enhance GraphicsFuzz's support for OpenGL semantics and extend its capabilities with new transformations, then apply them to open-source drivers.
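The kind of semantics-preserving transformation described above can be sketched at the expression level: rewrite an expression into an equivalent form (multiplicative identity, additive identity, or a built-in that returns its input) so the compiled machine code differs while the rendered result should not. The string-level rewriting below is purely illustrative; the real GraphicsFuzz transformations operate on a GLSL AST.

```python
import random

# Identity rewrites that preserve the value of a (side-effect-free,
# finite, non-NaN) floating-point expression.
IDENTITIES = [
    lambda e: f"({e}) * 1.0",          # multiplicative identity
    lambda e: f"({e}) + 0.0",          # additive identity
    lambda e: f"min(({e}), ({e}))",    # built-in returning its input
]

def mutate(expr, rng=random.Random(0)):
    """Pick one identity rewrite and apply it to the expression."""
    return rng.choice(IDENTITIES)(expr)

variant = mutate("a + b")
# variant computes the same value as "a + b" but forces the shader
# compiler down a different code path, which is what exposes bugs
```

Extending the set of rewrites to cover more of OpenGL's built-in math and transformation functions, as this project proposes, directly enlarges the space of variants the fuzzer can generate.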
The Beacon specification allows querying whether a genomic variation is present in a repository, but the current implementation is based on an outdated version of this specification. According to the latest specification, many new parameters are available that are useful for filtering down the data and can improve the querying experience. To improve this querying experience and the search results, I would update the existing implementation so that it matches the current specification.
Also, the current variation search APIs retrieve all the fields together. This is a lot of information at once and is cluttered. So, I would redesign these APIs and make them closer to RESTful principles.
Currently, variation search by 'geneId' is not efficient. To deal with this issue, I would build a data pipeline that loads mappings from a 'geneId' to its coordinates. These coordinates would then be used to search for variations.
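The pipeline's effect can be sketched in two steps: resolve the geneId to a chromosome range once, then search variations by coordinate instead of by gene name. All identifiers and coordinates below are illustrative placeholders, not data from the actual pipeline.

```python
# Mapping the proposed pipeline would load (placeholder values).
gene_coordinates = {
    "BRCA2": ("chr13", 32315474, 32400266),
}

# Variation records keyed by position (placeholder values).
variations = [
    ("chr13", 32316461, "rs206075"),
    ("chr17", 43044295, "rs799917"),
]

def search_by_gene(gene_id):
    """Resolve the gene to coordinates, then filter by range."""
    chrom, start, end = gene_coordinates[gene_id]
    return [name for c, pos, name in variations
            if c == chrom and start <= pos <= end]

hits = search_by_gene("BRCA2")
```

With the mapping precomputed, each gene query reduces to a coordinate-range scan, which is the operation variation stores can index efficiently.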
I would also adopt the models defined by VMC, which would enable standardized exchange of genomic data.
Based on the work done, feedback will be provided to the GA4GH workstreams.